I have been struggling to figure out why my images sometimes end up flipped after a shader has run and written its result.
My shader samples position and normal images to perform basic lighting calculations. The position and normal images are correctly oriented, but the resulting output ends up flipped. The same thing happens with my SSAO pass, where the SSAO pass flips the image and the subsequent SSAO blur pass then flips it back to normal.
I have tried
vec2(1.0 - uvCoords.x, uvCoords.y);
While this does flip the image back to normal, it also inverts keyboard and mouse control, so it is not the ideal solution.
The shader outputs to a quad which is created as:
std::vector<Vertex> Quad = {
    // pos                   // color              // texcoords
    {{-1.0f, -1.0f, 0.0f},   {1.0f, 0.0f, 0.0f},   {1.0f, 0.0f}},
    {{ 1.0f, -1.0f, 0.0f},   {0.0f, 1.0f, 0.0f},   {0.0f, 0.0f}},
    {{ 1.0f,  1.0f, 0.0f},   {0.0f, 0.0f, 1.0f},   {0.0f, 1.0f}},
    {{-1.0f,  1.0f, 0.0f},   {1.0f, 1.0f, 0.0f},   {1.0f, 1.0f}},
    {{-1.0f, -1.0f, 0.0f},   {1.0f, 0.0f, 0.0f},   {1.0f, 0.0f}},
    {{ 1.0f,  1.0f, 0.0f},   {0.0f, 0.0f, 1.0f},   {0.0f, 1.0f}},
};
Lighting pass shader:
void main() {
    vec3 FragPos = texture(gPosition, uvCoords).rgb;
    vec3 Normal = texture(gNormal, uvCoords).rgb;

    float ambientValue = 0.1;
    vec3 ambient = ambientValue * light.LightColor.xyz;

    vec3 lightingDirection = normalize(light.LightPosition.xyz - FragPos);
    float diffCalc = max(dot(Normal, lightingDirection), 0.0);
    vec3 diffuse = diffCalc * light.LightColor.xyz;

    vec3 result = (ambient + diffuse) * light.ObjectColor.xyz;
    outColor = vec4(result, 1.0);
}
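For clarity, the diffuse term in the shader is just a clamped dot product between the surface normal and the direction to the light. A plain C++ sketch of that calculation (hypothetical helpers, not part of the actual shader):

```cpp
#include <algorithm>
#include <array>
#include <cmath>

using Vec3 = std::array<float, 3>;

static float dot(const Vec3& a, const Vec3& b) {
    return a[0] * b[0] + a[1] * b[1] + a[2] * b[2];
}

static Vec3 normalize(const Vec3& v) {
    float len = std::sqrt(dot(v, v));
    return {v[0] / len, v[1] / len, v[2] / len};
}

// diff = max(dot(N, L), 0): full brightness when the normal faces the
// light, zero when it faces away from it.
static float diffuseFactor(const Vec3& normal, const Vec3& lightPos,
                           const Vec3& fragPos) {
    Vec3 toLight = normalize({lightPos[0] - fragPos[0],
                              lightPos[1] - fragPos[1],
                              lightPos[2] - fragPos[2]});
    return std::max(dot(normal, toLight), 0.0f);
}
```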
Below I am showing the G-buffer normal in its correct orientation, and the resulting lighting pass output, which ends up flipped:
It seems the x axis of your quad's UV coordinates is flipped:
std::vector<Vertex> Quad = {
    // pos                   // color              // texcoords
    {{-1.0f, -1.0f, 0.0f},   {1.0f, 0.0f, 0.0f},   {1.0f, 0.0f}},
    {{ 1.0f, -1.0f, 0.0f},   {0.0f, 1.0f, 0.0f},   {0.0f, 0.0f}},
    {{ 1.0f,  1.0f, 0.0f},   {0.0f, 0.0f, 1.0f},   {0.0f, 1.0f}},
    {{-1.0f,  1.0f, 0.0f},   {1.0f, 1.0f, 0.0f},   {1.0f, 1.0f}},
    {{-1.0f, -1.0f, 0.0f},   {1.0f, 0.0f, 0.0f},   {1.0f, 0.0f}},
    {{ 1.0f,  1.0f, 0.0f},   {0.0f, 0.0f, 1.0f},   {0.0f, 1.0f}},
};
You can see that pos x = -1.0f maps to texcoord 1.0f and pos x = 1.0f maps to 0.0f, so the quad does indeed flip the x axis.
The layout should be:
std::vector<Vertex> Quad = {
    // pos                   // color              // texcoords
    {{-1.0f, -1.0f, 0.0f},   {1.0f, 0.0f, 0.0f},   {0.0f, 0.0f}},
    {{ 1.0f, -1.0f, 0.0f},   {0.0f, 1.0f, 0.0f},   {1.0f, 0.0f}},
    {{ 1.0f,  1.0f, 0.0f},   {0.0f, 0.0f, 1.0f},   {1.0f, 1.0f}},
    {{-1.0f,  1.0f, 0.0f},   {1.0f, 1.0f, 0.0f},   {0.0f, 1.0f}},
    {{-1.0f, -1.0f, 0.0f},   {1.0f, 0.0f, 0.0f},   {0.0f, 0.0f}},
    {{ 1.0f,  1.0f, 0.0f},   {0.0f, 0.0f, 1.0f},   {1.0f, 1.0f}},
};
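The corrected table follows the standard fullscreen-quad convention: texcoords grow in the same direction as the clip-space positions. A minimal sketch of that mapping, as a sanity check:

```cpp
// Standard fullscreen-quad mapping: an NDC coordinate in [-1, 1] maps
// to a texture coordinate in [0, 1] with the same orientation, so u
// must increase together with x (and v with y).
static float ndcToTexcoord(float ndc) {
    return (ndc + 1.0f) * 0.5f;
}
```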
I only managed to get a triangle working. Below is my code, but I don't know what to change to get a cube. I tried to fix the code myself but never got the cube, so this is the code from before I broke it. I would also like to get it rotating.
public class Triangle {
static final int VERTEX_POS_SIZE = 4;
static final int COLOR_SIZE = 4;
static final int VERTEX_POS_INDEX = 0;
static final int COLOR_INDEX = 1;
static final int VERTEX_POS_OFFSET = 0;
static final int COLOR_OFFSET = 0;
static final int VERTEX_ATTRIB_SIZE = VERTEX_POS_SIZE;
static final int COLOR_ATTRIB_SIZE = COLOR_SIZE;
private final int VERTEX_COUNT = triangleData.length / VERTEX_ATTRIB_SIZE;
private FloatBuffer vertexDataBuffer;
private FloatBuffer colorDataBuffer;
static float triangleData[] = { // the eight corners of a cube
0.0f, 0.0f, 0.0f, 1.0f,
1.0f, 0.0f, 0.0f, 1.0f,
1.0f, 0.0f, -1.0f, 1.0f,
0.0f, 0.0f, -1.0f, 1.0f,
0.0f, 1.0f, 0.0f, 1.0f,
1.0f, 1.0f, 0.0f, 1.0f,
1.0f, 1.0f, -1.0f, 1.0f,
0.0f, 1.0f, -1.0f, 1.0f,
};
static float colorData[] = { // note: only three colors for eight vertices
1.0f, 0.0f, 0.0f, 1.0f, // Red
0.0f, 1.0f, 0.0f, 1.0f, // Green
0.0f, 0.0f, 1.0f, 1.0f  // Blue
};
// Set color with red, green, blue and alpha (opacity) values
float color[] = { 0.63671875f, 0.76953125f, 0.22265625f, 1.0f };
private final int mProgram;
private final String vertexShaderCode =
// This matrix member variable provides a hook to manipulate
// the coordinates of the objects that use this vertex shader
"attribute vec4 vPosition; \n" +
"attribute vec4 vColor; \n" +
"uniform mat4 uMVPMatrix;\n" +
"varying vec4 c; \n" +
"void main() { \n" +
" c = vColor; \n" +
// the matrix must be included as a modifier of gl_Position
// Note that the uMVPMatrix factor *must be first* in order
// for the matrix multiplication product to be correct.
" gl_Position = uMVPMatrix * vPosition;\n" +
"}";
private final String fragmentShaderCode =
"precision mediump float;\n" +
"varying vec4 c;\n" +
"void main() {\n" +
" gl_FragColor = c;\n" +
"}";
// Use to access and set the view transformation
private int mMVPMatrixHandle;
private int positionHandle;
private int colorHandle;
public Triangle() {
// initialize vertex byte buffer for shape coordinates
ByteBuffer bbv = ByteBuffer.allocateDirect(
// (number of coordinate values * 4 bytes per float)
triangleData.length * 4);
// use the device hardware's native byte order
bbv.order(ByteOrder.nativeOrder());
// create a floating point buffer from the ByteBuffer
vertexDataBuffer = bbv.asFloatBuffer();
// add the coordinates to the FloatBuffer
vertexDataBuffer.put(triangleData);
// set the buffer to read the first coordinate
vertexDataBuffer.position(0);
// initialize vertex byte buffer for shape coordinates
ByteBuffer bbc = ByteBuffer.allocateDirect(
// (number of coordinate values * 4 bytes per float)
colorData.length * 4);
// use the device hardware's native byte order
bbc.order(ByteOrder.nativeOrder());
// create a floating point buffer from the ByteBuffer
colorDataBuffer = bbc.asFloatBuffer();
// add the coordinates to the FloatBuffer
colorDataBuffer.put(colorData);
// set the buffer to read the first coordinate
colorDataBuffer.position(0);
int vertexShader = CGRenderer.loadShader(GLES20.GL_VERTEX_SHADER,
vertexShaderCode);
int fragmentShader = CGRenderer.loadShader(GLES20.GL_FRAGMENT_SHADER,
fragmentShaderCode);
// create empty OpenGL ES Program
mProgram = GLES20.glCreateProgram();
// add the vertex shader to program
GLES20.glAttachShader(mProgram, vertexShader);
// add the fragment shader to program
GLES20.glAttachShader(mProgram, fragmentShader);
// creates OpenGL ES program executables
GLES20.glLinkProgram(mProgram);
}
public void draw(float[] mvpMatrix) {
// Add program to OpenGL ES environment
GLES20.glUseProgram(mProgram);
// get handle to shape's transformation matrix
mMVPMatrixHandle = GLES20.glGetUniformLocation(mProgram, "uMVPMatrix");
positionHandle = GLES20.glGetAttribLocation(mProgram, "vPosition");
GLES20.glEnableVertexAttribArray(positionHandle);
// Prepare the triangle coordinate data
GLES20.glVertexAttribPointer(positionHandle, VERTEX_POS_SIZE,
GLES20.GL_FLOAT, false,
VERTEX_ATTRIB_SIZE * 4, vertexDataBuffer);
colorHandle = GLES20.glGetAttribLocation(mProgram, "vColor");
GLES20.glEnableVertexAttribArray(colorHandle);
GLES20.glVertexAttribPointer(colorHandle, COLOR_SIZE,
GLES20.GL_FLOAT, false,
COLOR_ATTRIB_SIZE * 4, colorDataBuffer);
// Pass the projection and view transformation to the shader
GLES20.glUniformMatrix4fv(mMVPMatrixHandle, 1, false, mvpMatrix, 0);
// Draw the triangle
GLES20.glDrawArrays(GLES20.GL_TRIANGLES, 0, VERTEX_COUNT);
// Disable vertex array
GLES20.glDisableVertexAttribArray(positionHandle);
GLES20.glDisableVertexAttribArray(colorHandle);
}
}
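One way to get a cube from those eight corner vertices is an index buffer: 6 faces × 2 triangles × 3 indices = 36 entries, all referring back into the 8-vertex array, drawn with `glDrawElements` instead of `glDrawArrays`. A sketch of such an index list (the winding here is illustrative and must be matched to your actual vertex order):

```cpp
#include <vector>

// 12 triangles over the 8 cube corners listed in triangleData above:
// bottom four vertices first (indices 0-3), top four second (4-7).
static std::vector<int> cubeIndices() {
    return {
        0, 1, 2,  0, 2, 3,   // bottom (y = 0)
        4, 6, 5,  4, 7, 6,   // top    (y = 1)
        0, 4, 5,  0, 5, 1,   // front  (z = 0)
        3, 2, 6,  3, 6, 7,   // back   (z = -1)
        0, 3, 7,  0, 7, 4,   // left   (x = 0)
        1, 5, 6,  1, 6, 2    // right  (x = 1)
    };
}
```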
First, let me describe what I want: an OpenGL-capable context (Qt 4.8 based, though it need not be) sitting somewhere in the background. On top of that, the first layer is an image with a transparent ROUND hole through which the OpenGL context is visible, and on top of that static image there are buttons for some logic.
What I did:
I try to render an SVG with QSvgRenderer in front of an OpenGL scene.
The OpenGL scene is drawn in a QGraphicsView with a QGLWidget as viewport.
int main(int argc, char **argv){
QApplication app(argc, argv);
GraphicsView view;
// trying to get the antialiasing to work
view.setRenderHints( QPainter::Antialiasing | QPainter::SmoothPixmapTransform );
QGLWidget* glwid = new QGLWidget( QGLFormat( QGL::SampleBuffers | QGL::AlphaChannel | QGL::Rgba ));
// Set multisampling. Note: format() returns a copy, so calling
// setSamples() on it directly has no effect; modify the copy, then
// set it back on the widget.
QGLFormat frm = glwid->format();
frm.setSamples(4);
glwid->setFormat(frm);
//set the GlViewport
view.setViewport(glwid); // --> GL rendering is antialiased, but SVG rendering is not
//view.setViewport(new QWidget); // --> SVG rendering is antialiased, but no GL
view.setViewportUpdateMode(QGraphicsView::SmartViewportUpdate);
view.setScene(new OpenGLScene(view.rect()));
view.show();
return app.exec();
}
The scene looks like this:
#include "openglscene.h"
#include "PotentioMeter.h"
#include <QtGui>
#include <QtOpenGL>
#include <GL/glu.h>
#ifndef GL_MULTISAMPLE
#define GL_MULTISAMPLE 0x809D
#endif
OpenGLScene::OpenGLScene(const QRectF &rect)
: m_backgroundColor(0, 170, 255)
, m_distance(1.4f)
{
//One or more Widgets And buttons with svg special Looking (because of scaling)
PotentioMeter* pm = new PotentioMeter( );
pm->setAttribute(Qt::WA_TranslucentBackground, true);
pm->resize(500,500);
addWidget(pm);
QPushButton *backgroundButton = new QPushButton(tr("Choose background color"));
connect(backgroundButton, SIGNAL(clicked()), this, SLOT(setBackgroundColor()));
backgroundButton->move(900,20);
addWidget(backgroundButton);
//Where should be the gl things drawn
rGlViewport = QRectF(100, 100,rect.width()/2,rect.height()/2);
timer = new QTimer(this);
connect(timer, SIGNAL(timeout()), this, SLOT(update()));
const int UPDATE_RATE_25HZ_IN_MSEC = 40;
timer->start(UPDATE_RATE_25HZ_IN_MSEC);
}
//dummy gl drawing function
void OpenGLScene::pyramid(QColor* color)
{
glScalef(0.5,0.5,0.5);
glPushMatrix();
glRotatef(++angle, 0.0, 1.0, 0.0);
glBegin( GL_TRIANGLES );
glColor4f(color->redF(),color->greenF(),color->blueF(),1);
glColor3f( 1.0f, 0.0f, 0.0f ); glVertex3f( 0.0f, 1.f, 0.0f );
glColor3f( 0.0f, 1.0f, 0.0f ); glVertex3f( -1.0f, -1.0f, 1.0f );
glColor3f( 0.0f, 0.0f, 1.0f ); glVertex3f( 1.0f, -1.0f, 1.0f);
glColor4f(color->redF(),color->greenF(),color->blueF(),1);
glColor3f( 1.0f, 0.0f, 0.0f ); glVertex3f( 0.0f, 1.0f, 0.0f);
glColor3f( 0.0f, 1.0f, 0.0f ); glVertex3f( -1.0f, -1.0f, 1.0f);
glColor3f( 0.0f, 0.0f, 1.0f ); glVertex3f( 0.0f, -1.0f, -1.0f);
glColor4f(color->redF(),color->greenF(),color->blueF(),1);
glColor3f( 1.0f, 0.0f, 0.0f ); glVertex3f( 0.0f, 1.0f, 0.0f);
glColor3f( 0.0f, 1.0f, 0.0f ); glVertex3f( 0.0f, -1.0f, -1.0f);
glColor3f( 0.0f, 0.0f, 1.0f ); glVertex3f( 1.0f, -1.0f, 1.0f);
glColor4f(color->redF(),color->greenF(),color->blueF(),1);
glColor3f( 1.0f, 0.0f, 0.0f ); glVertex3f( -1.0f, -1.0f, 1.0f);
glColor3f( 0.0f, 1.0f, 0.0f ); glVertex3f( 0.0f, -1.0f, -1.0f);
glColor3f( 0.0f, 0.0f, 1.0f ); glVertex3f( 1.0f, -1.0f, 1.0f);
glEnd();
glPopMatrix();
}
void
OpenGLScene::drawBackground(QPainter *painter, const QRectF & f)
{
//check painter
if (painter->paintEngine()->type() != QPaintEngine::OpenGL
&& painter->paintEngine()->type() != QPaintEngine::OpenGL2)
{
qWarning(
"OpenGLScene: drawBackground needs a QGLWidget to be set as viewport on the graphics view");
return;
}
//check viewport
if (f != myViewport)
{
myViewport = f;
}
//desperately trying to do antialiasing
painter->setRenderHint(QPainter::Antialiasing);
painter->setRenderHint(QPainter::HighQualityAntialiasing);
glClearColor(m_backgroundColor.redF() + .2, m_backgroundColor.greenF() - .2, m_backgroundColor.blueF() + .2, 1.0f);
glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
glMatrixMode(GL_PROJECTION);
glPushMatrix();
glLoadIdentity();
glFrustum(-1, 1, -1, 1, -1000.0, 1000.0);
glMatrixMode(GL_MODELVIEW);
glPushMatrix();
glLoadIdentity();
glEnable(GL_MULTISAMPLE);
glViewport(rGlViewport.x(), rGlViewport.y(), rGlViewport.width(),
rGlViewport.height());
//draw some gl stuff
pyramid(&m_backgroundColor);
glViewport(myViewport.x(), myViewport.y(), myViewport.width(),
myViewport.height());
glPopMatrix();
glMatrixMode(GL_PROJECTION);
glPopMatrix();
}
void
OpenGLScene::setBackgroundColor()
{
const QColor color = QColorDialog::getColor(m_backgroundColor);
if (color.isValid())
{
m_backgroundColor = color;
}
}
GL but no AA:
No GL but AA:
The big problem is that when I try to render OpenGL and the SVG at the same time, either OpenGL is not rendered (because of the missing context) while the SVG is rendered perfectly, or OpenGL is rendered and the SVG is not antialiased correctly.
So, any ideas on how to get the antialiasing rendering correctly?
The complete project (ready for Eclipse, built with CMake), screenshots, and a prebuilt executable are in the link here:
https://www.dropbox.com/s/giqynqfuq2yl8li/SampleCode.zip
I'm very new to OpenGL, so this might be a very silly question.
What I'm using: Visual Studio 2010, MFC framework, Windows 7.
What I'm trying to do: an MFC control based on OpenGL that can show a preview of some files with zooming, panning and rotation.
What I've already done:
Derived a class from CButton to handle all events and have easy access to a CClientDC;
Following a more or less recent tutorial, I've encapsulated the creation of the OpenGL device in a class. I don't know if it's good or if I've understood what I've done, but it works.
Let me show you some code:
int COpenGLCtrl::OnCreate( LPCREATESTRUCT lpCreateStruct )
{
if( CButton::OnCreate( lpCreateStruct ) == -1 )
return -1;
m_pDC = new CClientDC( this );
// This is my encapsulated device.
m_GLDevice.Create( m_pDC->m_hDC );
InitGL();
return 0;
}
void COpenGLCtrl::InitGL( void )
{
glShadeModel( GL_SMOOTH );
glClearColor( 0.0f, 0.0f, 0.0f, 0.0f );
}
// Drawing inside button.
void COpenGLCtrl::OnPaint( void )
{
m_GLDevice.MakeCurrent();
DrawGLScene();
}
// This function is called before DrawGLScene the first time the control is being redrawed.
void COpenGLCtrl::ZoomOut( int nVal )
{
glMatrixMode( GL_PROJECTION );
glLoadIdentity();
glOrtho( 0.0f, nVal * 1.0f, 0.0f, nVal * 1.0f, nVal * 1.0f, 0.0f );
glMatrixMode( GL_MODELVIEW );
glLoadIdentity();
}
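As background for ZoomOut: glOrtho(l, r, b, t, n, f) maps each axis of the given box linearly onto [-1, 1], so nVal controls how much of the world fits in the view (a larger nVal means a larger visible region, i.e. zooming out). A sketch of that one-dimensional mapping:

```cpp
// The per-axis mapping glOrtho applies: [lo, hi] -> [-1, 1].
// With ZoomOut(nVal), the visible x range is [0, nVal]; growing nVal
// shows a larger world region, which is what makes it a zoom-out.
static float orthoMap(float v, float lo, float hi) {
    return 2.0f * (v - lo) / (hi - lo) - 1.0f;
}
```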
void COpenGLCtrl::DrawGLScene( void )
{
glClear( GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT );
glLoadIdentity();
glRotatef( m_fRotation_X, 1.0f, .0f, 0.0f );
glRotatef( m_fRotation_Y, 0.0f, 1.0f, 0.0f );
glTranslatef( m_fTraslation_X, m_fTraslation_Y, 0.0f );
glScalef( 1.0f, 1.0f, 0.0f );
glBegin( GL_TRIANGLES );
glColor3f( 1.0f, 0.0f, 0.0f );
glVertex3f( 0.0f, 0.0f, 0.0f );
glColor3f( 0.0f, 1.0f, 0.0f );
glVertex3f( 0.5f, 1.0f, 0.0f );
glColor3f( 0.0f, 0.0f, 1.0f );
glVertex3f( 1.0f, 0.0f, 0.0f );
glEnd();
SwapBuffers( m_pDC->m_hDC );
}
All this code is sufficient to draw a small triangle in my control, with its lower-left vertex at the OpenGL axis origin. It all works; I showed you this stuff so you can see if there is an error somewhere.
Now I want to pan around my triangle: from what I've understood, I have to modify the values of m_fTraslation_X and m_fTraslation_Y. I've decided to pan the view when the user holds down the left mouse button and moves it around:
void COpenGLCtrl::OnMouseMove( UINT nFlags, CPoint point )
{
int nRes;
CString strOut;
GLfloat winX, winY, winZ;
GLint viewport[4];
GLdouble vPos[3], modelview[16], projection[16];
// Converting CPoint MFC coordinates to OpenGL viewport coordinates.
m_p2dPosMouse = point;
glGetDoublev( GL_MODELVIEW_MATRIX, modelview );
glGetDoublev( GL_PROJECTION_MATRIX, projection );
glGetIntegerv( GL_VIEWPORT, viewport );
winX = (GLfloat)point.x;
winY = (GLfloat)viewport[3] - (GLfloat)point.y;
glReadPixels( point.x, int( winY ), 1, 1, GL_DEPTH_COMPONENT, GL_FLOAT, &winZ );
nRes = gluUnProject( winX, winY, winZ, modelview, projection, viewport, &vPos[0], &vPos[1], &vPos[2] );
m_vPosMouse[0] = (GLfloat)vPos[0];
m_vPosMouse[1] = (GLfloat)vPos[1];
m_vPosMouse[2] = (GLfloat)vPos[2];
if( nFlags == MK_LBUTTON )
{
if( !m_bTrasla )
{
ATLTRACE2( _T("\nInizio traslazione\n\n") );
memcpy( m_vPosMouse_ult, m_vPosMouse, 3 * sizeof( GLfloat ) );
m_p2dUltPos = point;
m_bTrasla = true;
}
/*OPENGL-WAY
m_fTraslation_X += m_vPosMouse[0] - m_vPosMouse_ult[0];
m_fTraslation_Y += m_vPosMouse[1] - m_vPosMouse_ult[1];
*/
/*MFC-WAY*/
m_fTraslation_X += ( point.x - m_p2dUltPos.x ) / 5000.0f;
m_fTraslation_Y += ( m_p2dUltPos.y - point.y ) / 5000.0f;
CString strBuf;
strBuf.Format( _T("%f - %f || CPoint.x: %ld CPoint.y: %ld\n"), m_fTraslation_X, m_fTraslation_Y, point.x, point.y );
ATLTRACE2( strBuf );
///////////////////////
// EDIT: Added in edit to solve MFC-WAY
m_p2dUltPos = point;
///////////////////////
memcpy( m_vPosMouse_ult, m_vPosMouse, 3 * sizeof( GLfloat ) );
}
if( m_CB_PosAgg.Valid() )
m_CB_PosAgg.Execute();
Invalidate();
}
In short: I keep track of the mouse's last position and calculate how much it has moved on X and on Y. These differences are my translations.
Finally, my problems:
If I use OpenGL coordinates (commented and labelled OPENGL-WAY), the pan is smooth and precise but it shakes. Even though my CPoint coordinates steadily increase (or decrease), and so do their differences, the OpenGL values do not. Here are some values:
m_fTraslate_X m_fTraslate_Y CPoint.x CPoint.y
0.000000 0.000000 274 328
0.014377 0.141573 283 265
0.020767 0.152809 287 260
0.001597 0.002247 288 259
0.007987 0.020225 292 251
0.025559 0.164045 295 246
0.027157 0.170786 296 243
As you can see, while the mouse position increases (in MFC the Y axis is inverted), at step 3 the X values suddenly drop from 0.02 to 0.002 and then go back up to 0.02! Why is this?
EDIT: solved in my answer. If I use MFC coordinates, labelled MFC-WAY, I get no shaking but a strong inertia. Sometimes, if I start moving my mouse to the right (assuming no up-down movement) and then change direction back to the left, my view keeps moving to the right. Where is the error?
Thanks in advance; I hope you won't ban me for this noobish essay.
Problem 1)
The first problem was that when the view gets invalidated and then redrawn, the translation still has to be performed! Solution:
if( nFlags == MK_RBUTTON && m_bTrasla )
{
// Converting MFC coordinates to OpenGL System.
ConversioneCoordinateMFC_A_OPENGL( m_vPosMouse_ult, m_p2dUltPos );
m_fTraslation_X += m_vPosMouse[0] - m_vPosMouse_ult[0];
m_fTraslation_Y += m_vPosMouse[1] - m_vPosMouse_ult[1];
Problem 2)
The MFC-style panning was not updating the last position of the mouse pointer. Adding this line:
m_p2dUltPos = point;
solved the problem.
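That second fix matters because the pan must be incremental: each event should contribute only the delta since the previous event. A plain C++ sketch of the corrected loop (names hypothetical):

```cpp
// Incremental panning: accumulate only the delta since the PREVIOUS
// mouse event, then record the current position as the new "last" one.
// Forgetting the last line makes every event re-apply the whole
// distance from the drag start, which produces the "inertia" described
// in the question.
struct Panner {
    float translationX = 0.0f;
    int lastX = 0;

    void onMouseMove(int x) {
        translationX += (x - lastX) / 5000.0f;  // same scale factor as the MFC-WAY code
        lastX = x;                              // the fix: update the last position
    }
};
```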
I am trying to texture-map a quad geometry object generated by createTexturedQuadGeometry with a texture that I load from an image. I then add this drawable to a node, add that node to the root, and render the hierarchy.
The code below shows how I do it. It compiles and runs, but I only get a blank black screen instead of the specified image. Can someone please point out what is wrong?
int main(int argc, char** argv)
{
osg::ref_ptr<osg::Group> root = new osg::Group;
osg::ref_ptr<osg::Texture2D> testTexture = new osg::Texture2D;
osg::ref_ptr<osg::Image> testImage = osgDB::readImageFile("testImage.png");
assert(testImage.valid());
int viewWidth = testImage->s();
int viewHeight = testImage->t();
testTexture->setImage(testImage);
osg::ref_ptr<osg::Geometry> pictureQuad = osg::createTexturedQuadGeometry(osg::Vec3(0.0f,0.0f,0.0f),
osg::Vec3(viewWidth,0.0f,0.0f),
osg::Vec3(0.0f,0.0f,viewHeight),
0.0f,
viewWidth,
viewHeight,
1.0f);
pictureQuad->getOrCreateStateSet()->setTextureAttributeAndModes(0, testTexture.get());
pictureQuad->getOrCreateStateSet()->setMode(GL_DEPTH_TEST, osg::StateAttribute::ON);
osg::ref_ptr<osg::Geode> textureHolder = new osg::Geode();
textureHolder->addDrawable(pictureQuad);
root->addChild(textureHolder);
osgViewer::Viewer viewer;
viewer.setSceneData(root.get());
viewer.run();
}
So, I happened to figure out the error.
createTexturedQuadGeometry expects normalised texture coordinates.
So,
osg::ref_ptr<osg::Geometry> pictureQuad = osg::createTexturedQuadGeometry(osg::Vec3(0.0f,0.0f,0.0f),
osg::Vec3(viewWidth,0.0f,0.0f),
osg::Vec3(0.0f,0.0f,viewHeight),
0.0f,
0.0f,
1.0f,
1.0f);
solves the problem.
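More generally, normalised texture coordinates are just pixel coordinates divided by the image size, so the image corner (viewWidth, viewHeight) becomes (1, 1). A quick sketch of that conversion (hypothetical helper):

```cpp
#include <utility>

// Pixel -> normalised texture coordinate: [0, size] maps to [0, 1].
static std::pair<float, float> toTexcoord(float px, float py,
                                          float width, float height) {
    return {px / width, py / height};
}
```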
I've been working on ping-pong shading and thought I had cracked it after my previous question. However, with further shader knowledge it looks like, while I'm able to run the shader on FBO A and on FBO B, the output from A is not used as the source for B. In other words, I'm not binding it correctly.
The code I'm using is below. The output of the second shader shows colour-based output, but the first shader converts the data to grayscale, so I know this isn't working as required.
I'd be grateful for any (yet further!) assistance.
Code below,
Cheers,
Simon
- (void) PingPong:(CVImageBufferRef)cameraframe;
{
// Standard texture coords for the rendering pipeline
static const GLfloat squareVertices[] = {
-1.0f, -1.0f,
1.0f, -1.0f,
-1.0f, 1.0f,
1.0f, 1.0f,
};
static const GLfloat textureVertices[] = {
1.0f, 1.0f,
1.0f, 0.0f,
0.0f, 1.0f,
0.0f, 0.0f,
};
if (context)
{
[EAGLContext setCurrentContext:context];
}
// Create two textures with the same configuration
int bufferHeight = CVPixelBufferGetHeight(cameraframe);
int bufferWidth = CVPixelBufferGetWidth(cameraframe);
// texture 1
glGenTextures(1, &tex_A);
glBindTexture(GL_TEXTURE_2D, tex_A);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);
// Using BGRA extension to pull in video frame data directly
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, bufferWidth, bufferHeight, 0, GL_BGRA, GL_UNSIGNED_BYTE, CVPixelBufferGetBaseAddress(cameraframe));
// Texture 2
glGenTextures(1, &tex_B);
glBindTexture(GL_TEXTURE_2D, tex_B); // without this bind, the parameters below apply to tex_A
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);
// Allocate storage (no initial data) so tex_B can be used as a render target
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, bufferWidth, bufferHeight, 0, GL_BGRA, GL_UNSIGNED_BYTE, NULL);
// Bind framebuffer A
glBindFramebuffer(GL_FRAMEBUFFER, fbo_A);
glViewport(0, 0, backingWidth, backingHeight);
glActiveTexture(GL_TEXTURE0);
glBindTexture(GL_TEXTURE_2D, tex_A);
// Update uniform values
glUniform1i(uniforms[UNIFORM_VIDEOFRAME], 0);
// Update attribute values.
glVertexAttribPointer(ATTRIB_VERTEX, 2, GL_FLOAT, 0, 0, squareVertices);
glEnableVertexAttribArray(ATTRIB_VERTEX);
glVertexAttribPointer(ATTRIB_TEXTUREPOSITON, 2, GL_FLOAT, 0, 0, textureVertices);
glEnableVertexAttribArray(ATTRIB_TEXTUREPOSITON);
// Use the first shader
glUseProgram(greyscaleProgram);
// Render a quad
glDrawArrays(GL_TRIANGLE_STRIP, 0, 4);
// Use the second shader
glUseProgram(program);
// Bind framebuffer B
glBindFramebuffer(GL_FRAMEBUFFER, fbo_B);
// Bind texture A and setup texture units for the shader
glActiveTexture(GL_TEXTURE0);
glBindTexture(GL_TEXTURE_2D, tex_A);
// Attach texture B as the render target of FBO B
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_TEXTURE_2D, tex_B, 0);
// Update uniform values
glUniform1i(uniforms[UNIFORM_VIDEOFRAME], 0);
// Update attribute values.
glVertexAttribPointer(ATTRIB_VERTEX, 2, GL_FLOAT, 0, 0, squareVertices);
glEnableVertexAttribArray(ATTRIB_VERTEX);
glVertexAttribPointer(ATTRIB_TEXTUREPOSITON, 2, GL_FLOAT, 0, 0, textureVertices);
glEnableVertexAttribArray(ATTRIB_TEXTUREPOSITON);
// Render a quad
glDrawArrays(GL_TRIANGLE_STRIP, 0, 4);
// Render the whole thing
glBindRenderbuffer(GL_RENDERBUFFER, renderbuffer);
[context presentRenderbuffer:GL_RENDERBUFFER];
glDeleteTextures(1, &tex_A);
glDeleteTextures(1, &tex_B);
}
I think what might be happening is that you are still rendering into framebuffer memory and not texture memory. IIRC, glFramebufferTexture2D does not act as a resolve/copy; instead it attaches the texture to the framebuffer, so that future render operations get written to the texture. You could have more problems, but I'm pretty sure your call to glFramebufferTexture2D should happen directly after your first call to glBindFramebuffer. This may not be your only problem, but it seems to be a significant one.