Multiple Render Targets (MRT) and OSG - openscenegraph

Folks,
I have studied FBO, RTT and MRT in order to include this feature in my application, but I ran into some problems/doubts for which my search turned up no answers or tips. Below is a description of my scenario. I'll be grateful if anyone can help me.
What do I want to do?
Attach two render textures (one for the color buffer, one for the depth buffer) to the same camera;
Display only the color buffer in the post-render camera;
Read the images from the depth and color buffers in a final draw callback;
Write the collected float images to disk.
What have I got so far?
Rendering to the color or depth buffer separately, but not both on the same camera;
Displaying the color buffer in the post-render camera;
Reading the color or depth buffer in the final draw callback;
Writing the collected image (color or depth) to disk, but only for GL_UNSIGNED_BYTE images. For GL_FLOAT images the following error is presented:
Error writing file ./Test-depth.png: Warning: Error in writing to "./Test-depth.png".
What are the doubts? (help!)
How can I properly render to both textures (color and depth buffer) from the same camera?
How can I properly read both the depth and color buffers in the final draw callback?
When writing the images to disk, why is the error presented only for GL_FLOAT images, and not for GL_UNSIGNED_BYTE?
Is attaching the render texture to an osg::Geode mandatory or optional in this process? Do I need to create two osg::Geodes (one for each buffer), or only one osg::Geode for both?
Please take a look at my current source code (what am I doing wrong here?):
// OSG includes
#include <osgDB/ReadFile>
#include <osgDB/WriteFile>
#include <osgViewer/Viewer>
#include <osg/Camera>
#include <osg/Geode>
#include <osg/Geometry>
#include <osg/Texture2D>

struct SnapImage : public osg::Camera::DrawCallback {
    SnapImage(osg::GraphicsContext* gc) {
        _image = new osg::Image;
        _depth = new osg::Image;
        if (gc->getTraits()) {
            int width = gc->getTraits()->width;
            int height = gc->getTraits()->height;
            _image->allocateImage(width, height, 1, GL_RGBA, GL_FLOAT);
            _depth->allocateImage(width, height, 1, GL_DEPTH_COMPONENT, GL_FLOAT);
        }
    }

    virtual void operator () (osg::RenderInfo& renderInfo) const {
        osg::Camera* camera = renderInfo.getCurrentCamera();
        osg::GraphicsContext* gc = camera->getGraphicsContext();
        if (gc->getTraits() && _image.valid()) {
            int width = gc->getTraits()->width;
            int height = gc->getTraits()->height;
            _image->readPixels(0, 0, width, height, GL_RGBA, GL_FLOAT);
            _depth->readPixels(0, 0, width, height, GL_DEPTH_COMPONENT, GL_FLOAT);
            osgDB::writeImageFile(*_image, "./Test-color.png");
            osgDB::writeImageFile(*_depth, "./Test-depth.png");
        }
    }

    osg::ref_ptr<osg::Image> _image;
    osg::ref_ptr<osg::Image> _depth;
};
osg::Camera* setupMRTCamera( osg::ref_ptr<osg::Camera> camera, std::vector<osg::Texture2D*>& attachedTextures, int w, int h ) {
    camera->setClearColor( osg::Vec4() );
    camera->setClearMask( GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT );
    camera->setRenderTargetImplementation( osg::Camera::FRAME_BUFFER_OBJECT );
    camera->setRenderOrder( osg::Camera::PRE_RENDER );
    camera->setViewport( 0, 0, w, h );

    osg::Texture2D* tex = new osg::Texture2D;
    tex->setTextureSize( w, h );
    tex->setSourceType( GL_FLOAT );
    tex->setSourceFormat( GL_RGBA );
    tex->setInternalFormat( GL_RGBA32F_ARB );
    tex->setResizeNonPowerOfTwoHint( false );
    tex->setFilter( osg::Texture2D::MIN_FILTER, osg::Texture2D::LINEAR );
    tex->setFilter( osg::Texture2D::MAG_FILTER, osg::Texture2D::LINEAR );
    attachedTextures.push_back( tex );
    camera->attach( osg::Camera::COLOR_BUFFER, tex );

    tex = new osg::Texture2D;
    tex->setTextureSize( w, h );
    tex->setSourceType( GL_FLOAT );
    tex->setSourceFormat( GL_DEPTH_COMPONENT );
    tex->setInternalFormat( GL_DEPTH_COMPONENT32 );
    tex->setResizeNonPowerOfTwoHint( false );
    attachedTextures.push_back( tex );
    camera->attach( osg::Camera::DEPTH_BUFFER, tex );

    return camera.release();
}
int main() {
    osg::ref_ptr< osg::Group > root( new osg::Group );
    root->addChild( osgDB::readNodeFile( "cow.osg" ) );

    unsigned int winW = 800;
    unsigned int winH = 600;

    osgViewer::Viewer viewer;
    viewer.setUpViewInWindow( 0, 0, winW, winH );
    viewer.setSceneData( root.get() );
    viewer.realize();

    // setup MRT camera
    std::vector<osg::Texture2D*> attachedTextures;
    osg::Camera* mrtCamera ( viewer.getCamera() );
    setupMRTCamera( mrtCamera, attachedTextures, winW, winH );

    // set RTT textures to quad
    osg::Geode* geode( new osg::Geode );
    geode->addDrawable( osg::createTexturedQuadGeometry(
        osg::Vec3(-1,-1,0), osg::Vec3(2.0,0.0,0.0), osg::Vec3(0.0,2.0,0.0)) );
    geode->getOrCreateStateSet()->setTextureAttributeAndModes( 0, attachedTextures[0] );
    geode->getOrCreateStateSet()->setMode( GL_LIGHTING, osg::StateAttribute::OFF );
    geode->getOrCreateStateSet()->setMode( GL_DEPTH_TEST, osg::StateAttribute::OFF );

    // configure postRenderCamera to draw fullscreen textured quad
    osg::Camera* postRenderCamera( new osg::Camera );
    postRenderCamera->setClearMask( 0 );
    postRenderCamera->setRenderTargetImplementation( osg::Camera::FRAME_BUFFER, osg::Camera::FRAME_BUFFER );
    postRenderCamera->setReferenceFrame( osg::Camera::ABSOLUTE_RF );
    postRenderCamera->setRenderOrder( osg::Camera::POST_RENDER );
    postRenderCamera->setViewMatrix( osg::Matrixd::identity() );
    postRenderCamera->setProjectionMatrix( osg::Matrixd::identity() );
    postRenderCamera->addChild( geode );
    root->addChild( postRenderCamera );

    // setup the callback
    SnapImage* finalDrawCallback = new SnapImage( viewer.getCamera()->getGraphicsContext() );
    mrtCamera->setFinalDrawCallback( finalDrawCallback );

    return viewer.run();
}
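As an aside, one variant I have been considering (an untested sketch; I am assuming that the osgDB TIFF plugin can store GL_FLOAT data, which PNG cannot) is to attach osg::Image objects directly to the camera, so that OSG performs the readback itself instead of my callback calling readPixels:
// Sketch: attach osg::Image objects so OSG fills them during rendering;
// afterwards write them out in a float-capable format instead of PNG.
osg::ref_ptr<osg::Image> colorImage = new osg::Image;
colorImage->allocateImage( winW, winH, 1, GL_RGBA, GL_FLOAT );
mrtCamera->attach( osg::Camera::COLOR_BUFFER, colorImage.get() );

osg::ref_ptr<osg::Image> depthImage = new osg::Image;
depthImage->allocateImage( winW, winH, 1, GL_DEPTH_COMPONENT, GL_FLOAT );
mrtCamera->attach( osg::Camera::DEPTH_BUFFER, depthImage.get() );

// later, e.g. after a viewer.frame() call:
osgDB::writeImageFile( *colorImage, "./Test-color.tif" );  // .png fails for GL_FLOAT data
osgDB::writeImageFile( *depthImage, "./Test-depth.tif" );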
Thanks in advance,
Rômulo Cerqueira

Related

Why are my StretchBlt images losing colours?

I am having some difficulty correctly populating a CListCtrl with thumbnails of monitor displays.
On the right of my CDialog I have a static control and I render the image on a white canvas like this:
void CCenterCursorOnScreenDlg::OnDrawItem(int nIDCtl, LPDRAWITEMSTRUCT lpDrawItemStruct)
{
    if (nIDCtl == IDC_STATIC_MONITOR && !m_imgPreview.IsNull())
    {
        // Set the mode
        SetStretchBltMode(lpDrawItemStruct->hDC, HALFTONE);
        // Wipe the canvas
        FillRect(lpDrawItemStruct->hDC, &lpDrawItemStruct->rcItem, static_cast<HBRUSH>(GetStockObject(WHITE_BRUSH)));
        // Get canvas rectangle
        const CRect rectCanvas(lpDrawItemStruct->rcItem);
        // Calculate ratio factors
        const float nRatioImage = m_imgPreview.GetWidth() / static_cast<float>(m_imgPreview.GetHeight());
        const float nRatioCanvas = rectCanvas.Width() / static_cast<float>(rectCanvas.Height());
        // Calculate new rectangle size
        // Account for portrait images (negative values)
        CRect rectDraw = rectCanvas;
        if (nRatioImage > nRatioCanvas)
            rectDraw.SetRect(0, 0, rectDraw.right, static_cast<int>(rectDraw.right / nRatioImage));
        else if (nRatioImage < nRatioCanvas)
            rectDraw.SetRect(0, 0, static_cast<int>((rectDraw.bottom * nRatioImage)), rectDraw.bottom);
        // Add a margin
        rectDraw.DeflateRect(5, 5);
        // Move to center
        const CSize ptOffset = rectCanvas.CenterPoint() - rectDraw.CenterPoint();
        rectDraw.OffsetRect(ptOffset);
        // Add a black frame
        FrameRect(lpDrawItemStruct->hDC, &lpDrawItemStruct->rcItem, static_cast<HBRUSH>(GetStockObject(BLACK_BRUSH)));
        // Draw
        m_imgPreview.Draw(lpDrawItemStruct->hDC, rectDraw);
        return;
    }
    CDialogEx::OnDrawItem(nIDCtl, lpDrawItemStruct);
}
The above works beautifully:
But I have problems with the CListCtrl versions of the images. For instance, I am losing the colouring as you can see.
My CImageList is created like this:
m_ImageListThumb.Create(THUMBNAIL_WIDTH, THUMBNAIL_HEIGHT, ILC_COLOR32, 0, 1);
m_ListThumbnail.SetImageList(&m_ImageListThumb, LVSIL_NORMAL);
I then create all the thumbnails by calling DrawThumbnails() in OnInitDialog:
void CCenterCursorOnScreenDlg::DrawThumbnails()
{
    int monitorIndex = 0;
    m_ListThumbnail.SetRedraw(FALSE);
    for (auto& strMonitor : m_monitors.strMonitorNames)
    {
        CImage img;
        CreateMonitorThumbnail(monitorIndex, img, true);
        CBitmap* pImage = new CBitmap();
        pImage->Attach((HBITMAP)img);
        m_ImageListThumb.Add(pImage, nullptr);
        CString strMonitorDesc = m_monitors.strMonitorNames.at(monitorIndex);
        strMonitorDesc.AppendFormat(L" (Screen %d)", monitorIndex + 1);
        m_ListThumbnail.InsertItem(monitorIndex, strMonitorDesc, monitorIndex);
        monitorIndex++;
        delete pImage;
    }
    m_ListThumbnail.SetRedraw(TRUE);
}
The CreateMonitorThumbnail function:
BOOL CCenterCursorOnScreenDlg::CreateMonitorThumbnail(const int iMonitorIndex, CImage &rImage, bool bSmall)
{
    const CRect rcCapture = m_monitors.rcMonitors.at(iMonitorIndex);
    // destroy the currently contained bitmap to create a new one
    rImage.Destroy();
    auto nWidth = rcCapture.Width();
    auto nHeight = rcCapture.Height();
    if (bSmall)
    {
        nWidth = THUMBNAIL_WIDTH;
        nHeight = THUMBNAIL_HEIGHT;
    }
    // create bitmap and attach it to this object
    if (!rImage.Create(nWidth, nHeight, 32, 0))
    {
        AfxMessageBox(L"Cannot create image!", MB_ICONERROR);
        return FALSE;
    }
    // create virtual screen DC
    CDC dcScreen;
    dcScreen.CreateDC(_T("DISPLAY"), nullptr, nullptr, nullptr);
    // copy the contents from the virtual screen DC
    BOOL bRet = FALSE;
    if (bSmall)
    {
        CRect rt(0, 0, nWidth, nHeight);
        //::FillRect(rImage.GetDC(), rt, static_cast<HBRUSH>(GetStockObject(WHITE_BRUSH)));
        bRet = ::StretchBlt(rImage.GetDC(), 0, 0,
                            nWidth,
                            nHeight,
                            dcScreen.m_hDC,
                            rcCapture.left,
                            rcCapture.top,
                            rcCapture.Width(),
                            rcCapture.Height(), SRCCOPY | CAPTUREBLT);
    }
    else
    {
        bRet = ::BitBlt(rImage.GetDC(), 0, 0,
                        rcCapture.Width(),
                        rcCapture.Height(),
                        dcScreen.m_hDC,
                        rcCapture.left,
                        rcCapture.top, SRCCOPY | CAPTUREBLT);
    }
    // do cleanup and return
    dcScreen.DeleteDC();
    rImage.ReleaseDC();
    return bRet;
}
Ideally I want to have exactly the same kind of visual image as on the right, but obviously resized down. How do I fix this?
I simplified the conversion from CImage to CBitmap, but it made no difference:
void CCenterCursorOnScreenDlg::DrawThumbnails()
{
    int monitorIndex = 0;
    // Stop redrawing the CListCtrl
    m_ListThumbnail.SetRedraw(FALSE);
    // Loop monitor info
    for (auto& strMonitor : m_monitors.strMonitorNames)
    {
        // Create the thumbnail image
        CImage monitorThumbnail;
        CreateMonitorThumbnail(monitorIndex, monitorThumbnail, true);
        // Convert it to a CBitmap
        CBitmap* pMonitorThumbnailBitmap = CBitmap::FromHandle(monitorThumbnail);
        // Add the CBitmap to the CImageList
        m_ImageListThumb.Add(pMonitorThumbnailBitmap, nullptr);
        // Build the caption description
        CString strMonitorDesc = m_monitors.strMonitorNames.at(monitorIndex);
        strMonitorDesc.AppendFormat(L" (Screen %d)", monitorIndex + 1);
        // Add the item to the CListCtrl
        m_ListThumbnail.InsertItem(monitorIndex, strMonitorDesc, monitorIndex);
        monitorIndex++;
    }
    // Start redrawing the CListCtrl again
    m_ListThumbnail.SetRedraw(TRUE);
}
If I change my code to pass false for the last parameter, so that it uses the original captured images without scaling down, the colours are good. So it is when I do:
if (bSmall)
{
    CRect rt(0, 0, nWidth, nHeight);
    //::FillRect(rImage.GetDC(), rt, static_cast<HBRUSH>(GetStockObject(WHITE_BRUSH)));
    bRet = ::StretchBlt(rImage.GetDC(), 0, 0,
                        nWidth,
                        nHeight,
                        dcScreen.m_hDC,
                        rcCapture.left,
                        rcCapture.top,
                        rcCapture.Width(),
                        rcCapture.Height(), SRCCOPY | CAPTUREBLT);
}
that it messes up.
My issue did not have anything to do with OnDrawItem. I simply included that to show how the image on the right was being rendered; I thought it might help as background information, but it has probably confused the question and I may take it out in the long run!
Based on the comments I was reminded about SetStretchBltMode, which was missing from CreateMonitorThumbnail. So I now have this function:
BOOL CCenterCursorOnScreenDlg::CreateMonitorThumbnail(const int iMonitorIndex, CImage &rImage, bool bResizeAsThumbnail)
{
    const CRect rcCapture = m_monitors.rcMonitors.at(iMonitorIndex);
    // Destroy the currently contained bitmap to create a new one
    rImage.Destroy();
    // Massage the dimensions as we want a thumbnail
    auto nWidth = rcCapture.Width();
    auto nHeight = rcCapture.Height();
    if (bResizeAsThumbnail)
    {
        nWidth = m_iThumbnailWidth;
        auto dRatio = rcCapture.Width() / nWidth;
        //nHeight = m_iThumbnailHeight;
        nHeight = static_cast<int>(rcCapture.Height() / dRatio);
        if (nHeight > m_iThumbnailHeight)
        {
            AfxMessageBox(L"Need to investigate!");
        }
    }
    // Create bitmap and attach it to this object
    if (!rImage.Create(nWidth, nHeight, 32, 0))
    {
        AfxMessageBox(L"Cannot create image!", MB_ICONERROR);
        return FALSE;
    }
    // Create virtual screen DC
    CDC dcScreen;
    dcScreen.CreateDC(L"DISPLAY", nullptr, nullptr, nullptr);
    // Copy (or resize) the contents from the virtual screen DC
    BOOL bRet = FALSE;
    auto dcImage = rImage.GetDC();
    if (bResizeAsThumbnail)
    {
        // Set the mode first!
        SetStretchBltMode(dcImage, COLORONCOLOR);
        CPen penBlack;
        penBlack.CreatePen(PS_SOLID, 3, RGB(0, 0, 0));
        ::Rectangle(dcImage, 0, 0, m_iThumbnailWidth, m_iThumbnailHeight);
        int iTop = (m_iThumbnailHeight - nHeight) / 2;
        // Copy (and resize)
        bRet = ::StretchBlt(dcImage, 0, iTop,
                            nWidth,
                            nHeight,
                            dcScreen.m_hDC,
                            rcCapture.left,
                            rcCapture.top,
                            rcCapture.Width(),
                            rcCapture.Height(), SRCCOPY | CAPTUREBLT);
    }
    else
    {
        // Copy
        bRet = ::BitBlt(dcImage, 0, 0,
                        rcCapture.Width(),
                        rcCapture.Height(),
                        dcScreen.m_hDC,
                        rcCapture.left,
                        rcCapture.top, SRCCOPY | CAPTUREBLT);
    }
    // Do cleanup and return
    dcScreen.DeleteDC();
    rImage.ReleaseDC();
    return bRet;
}
That was the key to getting the thumbnails showing with the right colours.
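In short, the essential change is setting the stretch mode before calling StretchBlt: by default GDI uses BLACKONWHITE, which ANDs the eliminated pixel rows together when shrinking and causes exactly this kind of colour loss. A minimal sketch, using a hypothetical helper around hypothetical DCs:
#include <windows.h>

// Hypothetical helper: copy a scaled-down snapshot of a source DC into a
// destination DC without losing colours.
void CopyScaled(HDC hdcDest, int dstW, int dstH, HDC hdcSrc, const RECT& rcSrc)
{
    // Without this call StretchBlt uses the default BLACKONWHITE mode;
    // COLORONCOLOR simply drops the eliminated lines and keeps the colours
    // intact (HALFTONE is smoother, but slower).
    SetStretchBltMode(hdcDest, COLORONCOLOR);
    StretchBlt(hdcDest, 0, 0, dstW, dstH,
               hdcSrc, rcSrc.left, rcSrc.top,
               rcSrc.right - rcSrc.left, rcSrc.bottom - rcSrc.top,
               SRCCOPY | CAPTUREBLT);
}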

How to apply a gradient effect on an image in GDI

How can I apply a gradient effect to an image, like the image below, in C#? I have a transparent image with a black drawing, and I want to apply a two-colour gradient to it. Is this possible in GDI?
Here is the effect I want to achieve:
http://postimg.org/image/ikz1ie7ip/
You create a PathGradientBrush and then draw your text with that brush.
To create a bitmap filled with a gradient brush you could do something like:
public Bitmap GradientImage(int width, int height, Color color1, Color color2, float angle)
{
    var r = new Rectangle(0, 0, width, height);
    var bmp = new Bitmap(width, height);
    using (var brush = new LinearGradientBrush(r, color1, color2, angle, true))
    using (var g = Graphics.FromImage(bmp))
        g.FillRectangle(brush, r);
    return bmp;
}
So now that you have an image with the gradient in it, all you have to do is to bring over the alpha channel from your original image into the newly created image. We can take the transferOneARGBChannelFromOneBitmapToAnother function from a blog post I once wrote:
public enum ChannelARGB
{
    Blue = 0,
    Green = 1,
    Red = 2,
    Alpha = 3
}

public static void transferOneARGBChannelFromOneBitmapToAnother(
    Bitmap source,
    Bitmap dest,
    ChannelARGB sourceChannel,
    ChannelARGB destChannel )
{
    if ( source.Size != dest.Size )
        throw new ArgumentException();
    Rectangle r = new Rectangle( Point.Empty, source.Size );
    BitmapData bdSrc = source.LockBits( r, ImageLockMode.ReadOnly, PixelFormat.Format32bppArgb );
    BitmapData bdDst = dest.LockBits( r, ImageLockMode.ReadWrite, PixelFormat.Format32bppArgb );
    unsafe
    {
        byte* bpSrc = (byte*)bdSrc.Scan0.ToPointer();
        byte* bpDst = (byte*)bdDst.Scan0.ToPointer();
        bpSrc += (int)sourceChannel;
        bpDst += (int)destChannel;
        for ( int i = r.Height * r.Width; i > 0; i-- )
        {
            *bpDst = *bpSrc;
            bpSrc += 4;
            bpDst += 4;
        }
    }
    source.UnlockBits( bdSrc );
    dest.UnlockBits( bdDst );
}
Now you could do something like:
var newImage = GradientImage( original.Width, original.Height, Color.Yellow, Color.Blue, 45 );
transferOneARGBChannelFromOneBitmapToAnother( original, newImage, ChannelARGB.Alpha, ChannelARGB.Alpha );
And there you are. :-)

How to update Texture2D in pixel shader every frame (in D3D10)?

Using D3D10, I am drawing a 2d rectangle and want to fill it with a texture (bitmap) that should change a few times every second (like displaying video).
I am using a shader effect with a Texture2D variable, and trying to update an ID3D10EffectShaderResourceVariable and redraw the mesh.
My actual usage will be copying bitmaps from memory and using UpdateSubresource.
But it did not work, so I reduced it to test switching between two DDS images.
The result is that it draws the first image as expected, but keeps drawing it instead of switching between the two images.
I am new to D3D. Can you explain whether this method can work at all, or suggest the right way to do it?
The shader effect:
Texture2D txDiffuse;
SamplerState samLinear
{
    Filter = MIN_MAG_MIP_LINEAR;
    AddressU = Wrap;
    AddressV = Wrap;
};

struct VS_INPUT
{
    float4 Pos : POSITION;
    float2 Tex : TEXCOORD;
};

struct PS_INPUT
{
    float4 Pos : SV_POSITION;
    float2 Tex : TEXCOORD0;
};

PS_INPUT VS( VS_INPUT input )
{
    PS_INPUT output = (PS_INPUT)0;
    output.Pos = input.Pos;
    output.Tex = input.Tex;
    return output;
}

float4 PS( PS_INPUT input ) : SV_Target
{
    return txDiffuse.Sample( samLinear, input.Tex );
}

technique10 Render
{
    pass P0
    {
        SetVertexShader( CompileShader( vs_4_0, VS() ) );
        SetGeometryShader( NULL );
        SetPixelShader( CompileShader( ps_4_0, PS() ) );
    }
}
Code (skipped many parts):
ID3D10ShaderResourceView* g_pTextureRV = NULL;
ID3D10EffectShaderResourceVariable* g_pDiffuseVariable = NULL;

D3DX10CreateEffectFromResource(gInstance, MAKEINTRESOURCE(IDR_RCDATA1), NULL, NULL, NULL,
                               "fx_4_0", dwShaderFlags, 0, device, NULL, NULL,
                               &g_pEffect, NULL, NULL);
g_pTechnique = g_pEffect->GetTechniqueByName( "Render" );
g_pDiffuseVariable = g_pEffect->GetVariableByName( "txDiffuse" )->AsShaderResource();

// this part is called on Frame render:
device->CreateRenderTargetView( backBuffer, NULL, &rtView );
device->ClearRenderTargetView( rtView, ClearColor );
if (g_pTextureRV != NULL) {
    g_pTextureRV->Release();
    g_pTextureRV = NULL;
}
D3DX10CreateShaderResourceViewFromFile(device, pCurrentDDSFilePath, NULL, NULL, &g_pTextureRV, NULL );
g_pDiffuseVariable->SetResource( g_pTextureRV );

D3D10_TECHNIQUE_DESC techDesc;
g_pTechnique->GetDesc( &techDesc );
for( UINT p = 0; p < techDesc.Passes; ++p )
{
    g_pTechnique->GetPassByIndex( p )->Apply( 0 );
    direct2dDrawingContext->dev->Draw( 6, 0 );
}

// ... present the current back buffer
One solution, not necessarily the best, but one that doesn't use custom shaders, follows (I wrote it in C# / Managed DirectX, but it should be easy to transcode).
Bitmap bmp; //the bitmap that you will use to update the texture
Texture tex; //the texture that DirectX will render

void Render()
{
    //render some stuff
    bmp = GetNextTextureFrame(); //whatever you do to update your bitmap
    Surface s = tex.GetSurfaceLevel(0);
    Graphics g = s.GetGraphics();
    //IntPtr hdc = g.GetHdc();
    //BitBlt(hdc, 0, 0, bmp.Width, bmp.Height, bmpHdc, 0, 0, 0xcc0020);
    //g.ReleaseHdc(hdc); // only needed together with the GetHdc/BitBlt path above
    g.DrawImageUnscaled(bmp, 0, 0);
    s.ReleaseGraphics();
    device.SetTexture(0, tex);
    //now render your primitives
    //render some more stuff
    //present
}
The commented out lines are the way I actually did it, using an hBitmap and DC with BitBlt, because it's faster than GDI+. A lot of people will probably tell you that the above is a bad way to do it, because of all the memory locking that has to occur, and they're probably right. But I was able to achieve 30fps with multiple 1920x1080 textures, so regardless of whether it's proper, it works.
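For the UpdateSubresource route mentioned in the question, a minimal native D3D10 sketch might look like the following (an illustration under assumptions: the texture is created elsewhere with D3D10_USAGE_DEFAULT, and pixels points to width * height 4-byte texels matching the texture's format):
#include <d3d10.h>

// Hypothetical helper: push a new frame of pixel data into an existing texture.
void UploadFrame(ID3D10Device* device, ID3D10Texture2D* texture,
                 const void* pixels, UINT width, UINT height)
{
    device->UpdateSubresource(
        texture,     // destination resource (must be D3D10_USAGE_DEFAULT)
        0,           // subresource index: mip level 0
        NULL,        // NULL box = update the whole surface
        pixels,      // source data
        width * 4,   // row pitch in bytes (4 bytes per texel assumed)
        0 );         // depth pitch, ignored for 2D textures
}
The shader resource view bound to txDiffuse can then stay the same across frames; only the texture contents change, so there is no need to release and recreate the view on every frame.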

Why do my OpenGL objects get rendered with the same texture?

UPDATE: With the help of @datenwolf I now know that the return value of gluBuild2DMipmaps is not a pointer to the texture; it is only an error code. I forgot to call glGenTextures and glBindTexture. Look at the method LoadTextureRaw in the answer below.
I have a problem when rendering multiple objects, each having its own texture file definition: they all draw the same texture. I created a class hierarchy, CDrawObject->CBall. In CDrawObject I define this:
public ref class CDrawObject
{
protected:
    BYTE * dataTexture;
    GLuint * texture;
public:
    String ^ filename;
    CDrawObject(void);
    virtual void draw();
    void LoadTextureRaw();
};
In LoadTextureRaw() I define this:
void CDrawObject::LoadTextureRaw()
{
    //GLuint texture;
    if(!filename) return;
    if(filename->Equals("")) return;
    texture = new GLuint;
    System::Drawing::Bitmap ^ bitmap = gcnew Bitmap(filename);
    int h = bitmap->Height;
    int w = bitmap->Width;
    int s = w * h;
    dataTexture = new BYTE[s * 3];
    System::Drawing::Rectangle rect = System::Drawing::Rectangle(0,0,w,h);
    System::Drawing::Imaging::BitmapData ^ bitmapData =
        bitmap->LockBits(rect, System::Drawing::Imaging::ImageLockMode::ReadWrite,
                         System::Drawing::Imaging::PixelFormat::Format24bppRgb);
    ::memcpy(dataTexture, bitmapData->Scan0.ToPointer(), s*3);
    /* old code
    bitmap->UnlockBits(bitmapData);
    pin_ptr<GLuint*> pt = &texture; //pin managed pointer, to be unmanaged
    **pt = gluBuild2DMipmaps(GL_TEXTURE_2D, 3, w,h,GL_BGR_EXT, GL_UNSIGNED_BYTE, dataTexture);
    */
    //new code : working fine this way. I forgot to call glGenTextures and glBindTexture
    bitmap->UnlockBits(bitmapData);
    pin_ptr<GLuint*> pt = &texture; //pin managed pointer, to be unmanaged... a must here :)
    glEnable(GL_TEXTURE_2D);
    glGenTextures(1, *pt);
    glBindTexture(GL_TEXTURE_2D, **pt);
    gluBuild2DMipmaps(GL_TEXTURE_2D, GL_RGB, w, h, GL_BGR_EXT, GL_UNSIGNED_BYTE, dataTexture);
}
And for CBall::draw itself, I define this:
void CBall::draw(){
    glLoadIdentity();
    if(texture != NULL && !filename->Equals(""))
    {
        glEnable(GL_TEXTURE_2D);
        pin_ptr<GLuint*> pt = &texture;
        glBindTexture(GL_TEXTURE_2D, **pt);
    }
    glEnable(GL_LIGHTING);
    glEnable(GL_LIGHT0);
    glShadeModel(GL_SMOOTH);
    glEnable(GL_DEPTH_TEST);
    glEnable(GL_NORMALIZE);
    glTranslatef(this->x, this->y, this->z);
    glRotatef(this->sudut_rotasi_x, 1, 0, 0);
    glRotatef(this->sudut_rotasi_y, 0, 1, 0);
    glRotatef(this->sudut_rotasi_z, 0, 0, 1);
    glScalef(this->x_scale, this->y_scale, this->z_scale);
    GLUquadricObj *q = gluNewQuadric();
    gluQuadricNormals(q, GL_SMOOTH);
    gluQuadricTexture(q, GL_TRUE);
    gluSphere(q, r, 32, 16);
    glFlush();
    glDisable(GL_TEXTURE_2D);
}
The problem is that when I draw two (or more) ball objects, they are all drawn with the same texture. I already debugged the code, and each object has a different texture variable. Here is a snippet of the code that draws those balls:
...
CBall ^ ball = gcnew CBall();
ball->x = Convert::ToSingle(r->GetAttribute("x"));
ball->y = Convert::ToSingle(r->GetAttribute("y"));
ball->z = Convert::ToSingle(r->GetAttribute("z"));
ball->r = Convert::ToSingle(r->GetAttribute("r"));
ball->filename = r->GetAttribute("filename");
ball->LoadTextureRaw();
addGraphic(id, ball);
...
This code is called from a method that reads an XML file.
What did I do wrong in this OpenGL code?
Your problem is that gluBuild2DMipmaps doesn't return the texture name, but an error code. You need to create a texture name separately.
Try this:
public ref class CDrawObject
{
protected:
    GLuint texture; // just a GLuint, not a pointer!
public:
    String ^ filename;
    CDrawObject(void);
    virtual void draw();
    void LoadTextureRaw();
};
Change LoadTextureRaw a bit:
void CDrawObject::LoadTextureRaw()
{
    if(!filename)
        return;
    if(filename->Equals(""))
        return;

    System::Drawing::Bitmap ^ bitmap = gcnew Bitmap(filename);
    int h = bitmap->Height;
    int w = bitmap->Width;
    int s = w * h;
    System::Drawing::Rectangle rect = System::Drawing::Rectangle(0,0,w,h);
    System::Drawing::Imaging::BitmapData ^ bitmapData =
        bitmap->LockBits( rect,
                          System::Drawing::Imaging::ImageLockMode::ReadWrite,
                          System::Drawing::Imaging::PixelFormat::Format24bppRgb );

    // This is the important part: We generate a texture name and...
    glGenTextures(1, &texture); // this should not require a pin_ptr; after all we're in the middle of a member function of the class, so the garbage collector will not kick in.
    // ...bind it, causing creation of a (yet uninitialized) texture object
    glBindTexture(GL_TEXTURE_2D, texture);

    GLint error = gluBuild2DMipmaps(
        GL_TEXTURE_2D,
        GL_RGB, // this should be a valid OpenGL token, not the number of components!
        w, h,
        GL_BGR_EXT, GL_UNSIGNED_BYTE,
        bitmapData->Scan0.ToPointer() );

    bitmap->UnlockBits(bitmapData);
}
Finally, draw (which I rearranged a little):
void CBall::draw(){
    glLoadIdentity();
    glEnable(GL_LIGHTING);
    glEnable(GL_LIGHT0);
    glShadeModel(GL_SMOOTH);
    glEnable(GL_DEPTH_TEST);
    glEnable(GL_NORMALIZE);
    glTranslatef(this->x, this->y, this->z);
    glRotatef(this->sudut_rotasi_x, 1, 0, 0);
    glRotatef(this->sudut_rotasi_y, 0, 1, 0);
    glRotatef(this->sudut_rotasi_z, 0, 0, 1);
    glScalef(this->x_scale, this->y_scale, this->z_scale);
    glEnable(GL_TEXTURE_2D);
    glBindTexture(GL_TEXTURE_2D, texture);
    GLUquadricObj *q = gluNewQuadric();
    gluQuadricNormals(q, GL_SMOOTH);
    gluQuadricTexture(q, GL_TRUE);
    gluSphere(q, r, 32, 16);
    // glFlush is not required
    glDisable(GL_TEXTURE_2D);
}

Clipplanes, vertex shaders and hardware vertex processing in Direct3D 9

I have an issue with clip planes in my application that I can reproduce in a sample from the DirectX SDK (February 2010).
I added a clipplane to the HLSLwithoutEffects sample:
...
D3DXPLANE g_Plane( 0.0f, 1.0f, 0.0f, 0.0f );
...
void SetupClipPlane(const D3DXMATRIXA16 & view, const D3DXMATRIXA16 & proj)
{
    D3DXMATRIXA16 m = view * proj;
    D3DXMatrixInverse( &m, NULL, &m );
    D3DXMatrixTranspose( &m, &m );

    D3DXPLANE plane;
    D3DXPlaneNormalize( &plane, &g_Plane );

    D3DXPLANE clipSpacePlane;
    D3DXPlaneTransform( &clipSpacePlane, &plane, &m );

    DXUTGetD3D9Device()->SetClipPlane( 0, clipSpacePlane );
}
void CALLBACK OnFrameMove( double fTime, float fElapsedTime, void* pUserContext )
{
    // Update the camera's position based on user input
    g_Camera.FrameMove( fElapsedTime );

    // Set up the vertex shader constants
    D3DXMATRIXA16 mWorldViewProj;
    D3DXMATRIXA16 mWorld;
    D3DXMATRIXA16 mView;
    D3DXMATRIXA16 mProj;
    mWorld = *g_Camera.GetWorldMatrix();
    mView = *g_Camera.GetViewMatrix();
    mProj = *g_Camera.GetProjMatrix();
    mWorldViewProj = mWorld * mView * mProj;
    g_pConstantTable->SetMatrix( DXUTGetD3D9Device(), "mWorldViewProj", &mWorldViewProj );
    g_pConstantTable->SetFloat( DXUTGetD3D9Device(), "fTime", ( float )fTime );

    SetupClipPlane( mView, mProj );
}
void CALLBACK OnFrameRender( IDirect3DDevice9* pd3dDevice, double fTime, float fElapsedTime, void* pUserContext )
{
    // If the settings dialog is being shown, then
    // render it instead of rendering the app's scene
    if( g_SettingsDlg.IsActive() )
    {
        g_SettingsDlg.OnRender( fElapsedTime );
        return;
    }

    HRESULT hr;

    // Clear the render target and the zbuffer
    V( pd3dDevice->Clear( 0, NULL, D3DCLEAR_TARGET | D3DCLEAR_ZBUFFER, D3DCOLOR_ARGB( 0, 45, 50, 170 ), 1.0f, 0 ) );

    // Render the scene
    if( SUCCEEDED( pd3dDevice->BeginScene() ) )
    {
        pd3dDevice->SetVertexDeclaration( g_pVertexDeclaration );
        pd3dDevice->SetVertexShader( g_pVertexShader );
        pd3dDevice->SetStreamSource( 0, g_pVB, 0, sizeof( D3DXVECTOR2 ) );
        pd3dDevice->SetIndices( g_pIB );
        pd3dDevice->SetRenderState( D3DRS_CLIPPLANEENABLE, D3DCLIPPLANE0 );
        V( pd3dDevice->DrawIndexedPrimitive( D3DPT_TRIANGLELIST, 0, 0, g_dwNumVertices,
                                             0, g_dwNumIndices / 3 ) );
        pd3dDevice->SetRenderState( D3DRS_CLIPPLANEENABLE, 0 );
        RenderText();
        V( g_HUD.OnRender( fElapsedTime ) );
        V( pd3dDevice->EndScene() );
    }
}
When I rotate the camera I have different visual results when using hardware and software vertex processing.
In software vertex processing mode or when using the reference device the clipping plane works fine as expected. In hardware mode it seems to rotate with the camera.
If I remove the call to RenderText(); from OnFrameRender then hardware rendering also works fine. Further debugging reveals that the problem is in ID3DXFont::DrawText.
I have this issue in Windows Vista and Windows 7 but not in Windows XP. I tested the code with the latest NVidia and ATI drivers in all three OSes on different PCs.
Is it a DirectX issue? Or incorrect usage of clipplanes?
Thanks
Igor
Well that suggests that something in RenderText is changing the clip plane. It could well be a driver "optimisation" that is hurting you.
Consider just setting the clip plane immediately before you do the DrawIndexedPrimitive.
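In code, that could look like the following (a sketch reusing the names from the sample above; it assumes the camera matrices are fetched again inside OnFrameRender):
// Inside OnFrameRender, immediately before drawing the clipped geometry,
// re-apply the plane so any state touched by ID3DXFont::DrawText on the
// previous frame cannot leak into this draw call.
D3DXMATRIXA16 mView = *g_Camera.GetViewMatrix();
D3DXMATRIXA16 mProj = *g_Camera.GetProjMatrix();
SetupClipPlane( mView, mProj );
pd3dDevice->SetRenderState( D3DRS_CLIPPLANEENABLE, D3DCLIPPLANE0 );
V( pd3dDevice->DrawIndexedPrimitive( D3DPT_TRIANGLELIST, 0, 0, g_dwNumVertices,
                                     0, g_dwNumIndices / 3 ) );
pd3dDevice->SetRenderState( D3DRS_CLIPPLANEENABLE, 0 );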
