I have an issue with clip planes in my application that I can reproduce in a sample from the DirectX SDK (February 2010).
I added a clip plane to the HLSLwithoutEffects sample:
...
D3DXPLANE g_Plane( 0.0f, 1.0f, 0.0f, 0.0f );
...
void SetupClipPlane( const D3DXMATRIXA16& view, const D3DXMATRIXA16& proj )
{
    // With a programmable vertex shader the clip plane is interpreted in clip
    // space, so transform the world-space plane by the inverse-transpose of
    // view * proj (the sample's world matrix is identity).
    D3DXMATRIXA16 m = view * proj;
    D3DXMatrixInverse( &m, NULL, &m );
    D3DXMatrixTranspose( &m, &m );

    D3DXPLANE plane;
    D3DXPlaneNormalize( &plane, &g_Plane );

    D3DXPLANE clipSpacePlane;
    D3DXPlaneTransform( &clipSpacePlane, &plane, &m );
    DXUTGetD3D9Device()->SetClipPlane( 0, clipSpacePlane );
}
void CALLBACK OnFrameMove( double fTime, float fElapsedTime, void* pUserContext )
{
// Update the camera's position based on user input
g_Camera.FrameMove( fElapsedTime );
// Set up the vertex shader constants
D3DXMATRIXA16 mWorldViewProj;
D3DXMATRIXA16 mWorld;
D3DXMATRIXA16 mView;
D3DXMATRIXA16 mProj;
mWorld = *g_Camera.GetWorldMatrix();
mView = *g_Camera.GetViewMatrix();
mProj = *g_Camera.GetProjMatrix();
mWorldViewProj = mWorld * mView * mProj;
g_pConstantTable->SetMatrix( DXUTGetD3D9Device(), "mWorldViewProj", &mWorldViewProj );
g_pConstantTable->SetFloat( DXUTGetD3D9Device(), "fTime", ( float )fTime );
SetupClipPlane( mView, mProj );
}
void CALLBACK OnFrameRender( IDirect3DDevice9* pd3dDevice, double fTime, float fElapsedTime, void* pUserContext )
{
// If the settings dialog is being shown, then
// render it instead of rendering the app's scene
if( g_SettingsDlg.IsActive() )
{
g_SettingsDlg.OnRender( fElapsedTime );
return;
}
HRESULT hr;
// Clear the render target and the zbuffer
V( pd3dDevice->Clear( 0, NULL, D3DCLEAR_TARGET | D3DCLEAR_ZBUFFER, D3DCOLOR_ARGB( 0, 45, 50, 170 ), 1.0f, 0 ) );
// Render the scene
if( SUCCEEDED( pd3dDevice->BeginScene() ) )
{
pd3dDevice->SetVertexDeclaration( g_pVertexDeclaration );
pd3dDevice->SetVertexShader( g_pVertexShader );
pd3dDevice->SetStreamSource( 0, g_pVB, 0, sizeof( D3DXVECTOR2 ) );
pd3dDevice->SetIndices( g_pIB );
pd3dDevice->SetRenderState( D3DRS_CLIPPLANEENABLE, D3DCLIPPLANE0 );
V( pd3dDevice->DrawIndexedPrimitive( D3DPT_TRIANGLELIST, 0, 0, g_dwNumVertices,
0, g_dwNumIndices / 3 ) );
pd3dDevice->SetRenderState( D3DRS_CLIPPLANEENABLE, 0 );
RenderText();
V( g_HUD.OnRender( fElapsedTime ) );
V( pd3dDevice->EndScene() );
}
}
When I rotate the camera I get different visual results with hardware and software vertex processing.
In software vertex processing mode, or when using the reference device, the clipping plane works as expected. In hardware mode it seems to rotate with the camera.
If I remove the call to RenderText() from OnFrameRender, then hardware rendering also works fine. Further debugging reveals that the problem is in ID3DXFont::DrawText.
I have this issue on Windows Vista and Windows 7, but not on Windows XP. I tested the code with the latest NVIDIA and ATI drivers on all three OSes on different PCs.
Is this a DirectX issue, or incorrect usage of clip planes?
Thanks
Igor
Well, that suggests that something in RenderText is changing the clip plane. It could well be a driver "optimisation" that is hurting you.
Consider just setting the clip plane immediately before you issue the DrawIndexedPrimitive call, for example as in the sketch below.
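Something like this, as a sketch only (it reuses SetupClipPlane and the other names from the code above):
// Inside OnFrameRender, right before the draw call:
// re-derive and set the plane here, so whatever ID3DXFont::DrawText changed
// on the previous frame cannot carry over into this one.
SetupClipPlane( *g_Camera.GetViewMatrix(), *g_Camera.GetProjMatrix() );
pd3dDevice->SetRenderState( D3DRS_CLIPPLANEENABLE, D3DCLIPPLANE0 );

V( pd3dDevice->DrawIndexedPrimitive( D3DPT_TRIANGLELIST, 0, 0, g_dwNumVertices,
                                     0, g_dwNumIndices / 3 ) );
pd3dDevice->SetRenderState( D3DRS_CLIPPLANEENABLE, 0 );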
I saw some solutions that accessed the THREE.BoxGeometry faces like that:
var geometry = new THREE.BoxGeometry(n,n,n)
let faces = geometry.faces;
for (let face of faces) {
face.color.set(Math.random() * 0xffffff);
}
But it seems that the faces array doesn't exist in the current three.js version, r129 (Uncaught TypeError: faces is not iterable).
How can I achieve an easy coloring of the six cube faces?
Try it like so:
let camera, scene, renderer, mesh;
init();
animate();
function init() {
camera = new THREE.PerspectiveCamera( 70, window.innerWidth / window.innerHeight, 0.01, 10 );
camera.position.z = 1;
scene = new THREE.Scene();
const geometry = new THREE.BoxGeometry( 0.2, 0.2, 0.2 ).toNonIndexed();
// vertexColors must be true so vertex colors can be used in the shader
const material = new THREE.MeshBasicMaterial( { vertexColors: true } );
// generate color data for each vertex
const positionAttribute = geometry.getAttribute( 'position' );
const colors = [];
const color = new THREE.Color();
for ( let i = 0; i < positionAttribute.count; i += 3 ) {
color.set( Math.random() * 0xffffff );
// define the same color for each vertex of a triangle
colors.push( color.r, color.g, color.b );
colors.push( color.r, color.g, color.b );
colors.push( color.r, color.g, color.b );
}
// define the new attribute
geometry.setAttribute( 'color', new THREE.Float32BufferAttribute( colors, 3 ) );
mesh = new THREE.Mesh( geometry, material );
scene.add( mesh );
renderer = new THREE.WebGLRenderer( { antialias: true } );
renderer.setSize( window.innerWidth, window.innerHeight );
document.body.appendChild( renderer.domElement );
}
function animate() {
requestAnimationFrame( animate );
mesh.rotation.x += 0.01;
mesh.rotation.y += 0.02;
renderer.render( scene, camera );
}
body {
margin: 0;
}
<script src="https://cdn.jsdelivr.net/npm/three@0.129.0/build/three.js"></script>
The previous geometry format (THREE.Geometry) was removed from the core with r125. More information is available in the following post on the three.js forum:
https://discourse.threejs.org/t/three-geometry-will-be-removed-from-core-with-r125/22401
Folks,
I have studied FBOs, RTT and MRT in order to include this feature in my application, but I ran into some problems/doubts for which I could not find answers or tips during my search. Below is a description of my scenario. I'll be grateful if anyone can help me.
What do I want to do?
Attach two render textures (for color and depth buffers) to the same camera;
Display only the color buffer in the post render camera;
Read images from depth and color buffer in a final draw callback;
Write the collected float images to disk.
What have I got so far?
Rendering to the color or the depth buffer separately, but not to both on the same camera;
Displaying the color buffer in the post render camera;
Reading the color or the depth buffer in the final draw callback;
Writing the collected image (color or depth) to disk, but only for images stored as GL_UNSIGNED_BYTE. For GL_FLOAT the following error is presented:
Error writing file ./Test-depth.png: Warning: Error in writing to "./Test-depth.png".
What are the doubts? (help!)
How can I properly render to both textures (color and depth buffer) with the same camera? (See the sketch after this list.)
How can I properly read both the depth and the color buffers in the final draw callback?
When writing the images to disk, why is the error reported only for GL_FLOAT images and not for GL_UNSIGNED_BYTE?
Is the render texture attached to an osg::Geode mandatory or optional in this process? Do I need to create two osg::Geode instances (one for each buffer), or only one for both?
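A minimal sketch, assuming osg::Image objects can be attached directly to the camera (OSG should then perform the readback itself, so no readPixels call is needed in a callback); variable names reuse those from the code below:
// Attach images directly; OSG copies the buffers back after the camera renders.
osg::ref_ptr<osg::Image> colorImage = new osg::Image;
colorImage->allocateImage( winW, winH, 1, GL_RGBA, GL_FLOAT );
mrtCamera->attach( osg::Camera::COLOR_BUFFER, colorImage.get() );

osg::ref_ptr<osg::Image> depthImage = new osg::Image;
depthImage->allocateImage( winW, winH, 1, GL_DEPTH_COMPONENT, GL_FLOAT );
mrtCamera->attach( osg::Camera::DEPTH_BUFFER, depthImage.get() );

// After a frame has been rendered, write to a float-capable format;
// PNG cannot store GL_FLOAT data, which may be the cause of the write error.
viewer.frame();
osgDB::writeImageFile( *colorImage, "./Test-color.tif" );
osgDB::writeImageFile( *depthImage, "./Test-depth.tif" );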
Please take a look at my current source code (what am I doing wrong here?):
// OSG includes
#include <osgDB/ReadFile>
#include <osgDB/WriteFile>
#include <osgViewer/Viewer>
#include <osg/Camera>
#include <osg/Geode>
#include <osg/Geometry>
#include <osg/Texture2D>
struct SnapImage : public osg::Camera::DrawCallback {
SnapImage(osg::GraphicsContext* gc) {
_image = new osg::Image;
_depth = new osg::Image;
if (gc->getTraits()) {
int width = gc->getTraits()->width;
int height = gc->getTraits()->height;
_image->allocateImage(width, height, 1, GL_RGBA, GL_FLOAT);
_depth->allocateImage(width, height, 1, GL_DEPTH_COMPONENT, GL_FLOAT);
}
}
virtual void operator () (osg::RenderInfo& renderInfo) const {
osg::Camera* camera = renderInfo.getCurrentCamera();
osg::GraphicsContext* gc = camera->getGraphicsContext();
if (gc->getTraits() && _image.valid()) {
int width = gc->getTraits()->width;
int height = gc->getTraits()->height;
_image->readPixels(0, 0, width, height, GL_RGBA, GL_FLOAT);
_depth->readPixels(0, 0, width, height, GL_DEPTH_COMPONENT, GL_FLOAT);
osgDB::writeImageFile(*_image, "./Test-color.png");
osgDB::writeImageFile(*_depth, "./Test-depth.png");
}
}
osg::ref_ptr<osg::Image> _image;
osg::ref_ptr<osg::Image> _depth;
};
osg::Camera* setupMRTCamera( osg::ref_ptr<osg::Camera> camera, std::vector<osg::Texture2D*>& attachedTextures, int w, int h ) {
camera->setClearColor( osg::Vec4() );
camera->setClearMask( GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT );
camera->setRenderTargetImplementation( osg::Camera::FRAME_BUFFER_OBJECT );
camera->setRenderOrder( osg::Camera::PRE_RENDER );
camera->setViewport( 0, 0, w, h );
osg::Texture2D* tex = new osg::Texture2D;
tex->setTextureSize( w, h );
tex->setSourceType( GL_FLOAT );
tex->setSourceFormat( GL_RGBA );
tex->setInternalFormat( GL_RGBA32F_ARB );
tex->setResizeNonPowerOfTwoHint( false );
tex->setFilter( osg::Texture2D::MIN_FILTER, osg::Texture2D::LINEAR );
tex->setFilter( osg::Texture2D::MAG_FILTER, osg::Texture2D::LINEAR );
attachedTextures.push_back( tex );
camera->attach( osg::Camera::COLOR_BUFFER, tex );
tex = new osg::Texture2D;
tex->setTextureSize( w, h );
tex->setSourceType( GL_FLOAT );
tex->setSourceFormat( GL_DEPTH_COMPONENT );
tex->setInternalFormat( GL_DEPTH_COMPONENT32 );
tex->setResizeNonPowerOfTwoHint( false );
attachedTextures.push_back( tex );
camera->attach( osg::Camera::DEPTH_BUFFER, tex );
return camera.release();
}
int main() {
osg::ref_ptr< osg::Group > root( new osg::Group );
root->addChild( osgDB::readNodeFile( "cow.osg" ) );
unsigned int winW = 800;
unsigned int winH = 600;
osgViewer::Viewer viewer;
viewer.setUpViewInWindow( 0, 0, winW, winH );
viewer.setSceneData( root.get() );
viewer.realize();
// setup MRT camera
std::vector<osg::Texture2D*> attachedTextures;
osg::Camera* mrtCamera ( viewer.getCamera() );
setupMRTCamera( mrtCamera, attachedTextures, winW, winH );
// set RTT textures to quad
osg::Geode* geode( new osg::Geode );
geode->addDrawable( osg::createTexturedQuadGeometry(
osg::Vec3(-1,-1,0), osg::Vec3(2.0,0.0,0.0), osg::Vec3(0.0,2.0,0.0)) );
geode->getOrCreateStateSet()->setTextureAttributeAndModes( 0, attachedTextures[0] );
geode->getOrCreateStateSet()->setMode( GL_LIGHTING, osg::StateAttribute::OFF );
geode->getOrCreateStateSet()->setMode( GL_DEPTH_TEST, osg::StateAttribute::OFF );
// configure postRenderCamera to draw fullscreen textured quad
osg::Camera* postRenderCamera( new osg::Camera );
postRenderCamera->setClearMask( 0 );
postRenderCamera->setRenderTargetImplementation( osg::Camera::FRAME_BUFFER, osg::Camera::FRAME_BUFFER );
postRenderCamera->setReferenceFrame( osg::Camera::ABSOLUTE_RF );
postRenderCamera->setRenderOrder( osg::Camera::POST_RENDER );
postRenderCamera->setViewMatrix( osg::Matrixd::identity() );
postRenderCamera->setProjectionMatrix( osg::Matrixd::identity() );
postRenderCamera->addChild( geode );
root->addChild(postRenderCamera);
// setup the callback
SnapImage* finalDrawCallback = new SnapImage(viewer.getCamera()->getGraphicsContext());
mrtCamera->setFinalDrawCallback(finalDrawCallback);
return (viewer.run());
}
Thanks in advance,
Rômulo Cerqueira
I am developing a small program that loads 3D models using Assimp, but it does not render the model. At first I thought the vertices and indices were not loaded correctly, but this is not the case (I printed the vertices and indices to a txt file). I think the problem might be with the position of the model and the camera. The application does not return any error; it runs properly.
Vertex Struct:
struct Vertex {
XMFLOAT3 position;
XMFLOAT2 texture;
XMFLOAT3 normal;
};
Input layout:
D3D12_INPUT_ELEMENT_DESC inputLayout[] =
{
{ "POSITION", 0, DXGI_FORMAT_R32G32B32_FLOAT, 0, 0, D3D12_INPUT_CLASSIFICATION_PER_VERTEX_DATA, 0 },
{ "TEXCOORD", 0, DXGI_FORMAT_R32G32_FLOAT, 0, 12, D3D12_INPUT_CLASSIFICATION_PER_VERTEX_DATA, 0 },
{ "NORMAL", 0, DXGI_FORMAT_R32G32B32_FLOAT, 0, D3D12_APPEND_ALIGNED_ELEMENT, D3D12_INPUT_CLASSIFICATION_PER_VERTEX_DATA, 0 }
};
Vertices, texcoords, normals and indices loader:
model = new ModelMesh();
std::vector<XMFLOAT3> positions;
std::vector<XMFLOAT3> normals;
std::vector<XMFLOAT2> texCoords;
std::vector<unsigned int> indices;
model->LoadMesh("beast.x", positions, normals,
texCoords, indices);
// Create vertex buffer
if (positions.size() == 0)
{
MessageBox(0, L"Vertices vector is empty.",
L"Error", MB_OK);
}
Vertex* vList = new Vertex[positions.size()];
for (size_t i = 0; i < positions.size(); i++)
{
Vertex vert;
XMFLOAT3 pos = positions[i];
vert.position = XMFLOAT3(pos.x, pos.y, pos.z);
XMFLOAT3 norm = normals[i];
vert.normal = XMFLOAT3(norm.x, norm.y, norm.z);
XMFLOAT2 tex = texCoords[i];
vert.texture = XMFLOAT2(tex.x, tex.y);
vList[i] = vert;
}
int vBufferSize = sizeof(vList);
Building the camera and the view/projection matrices:
XMMATRIX tmpMat = XMMatrixPerspectiveFovLH(45.0f*(3.14f/180.0f), (float)Width / (float)Height, 0.1f, 1000.0f);
XMStoreFloat4x4(&cameraProjMat, tmpMat);
// set starting camera state
cameraPosition = XMFLOAT4(0.0f, 2.0f, -4.0f, 0.0f);
cameraTarget = XMFLOAT4(0.0f, 0.0f, 0.0f, 0.0f);
cameraUp = XMFLOAT4(0.0f, 1.0f, 0.0f, 0.0f);
// build view matrix
XMVECTOR cPos = XMLoadFloat4(&cameraPosition);
XMVECTOR cTarg = XMLoadFloat4(&cameraTarget);
XMVECTOR cUp = XMLoadFloat4(&cameraUp);
tmpMat = XMMatrixLookAtLH(cPos, cTarg, cUp);
XMStoreFloat4x4(&cameraViewMat, tmpMat);
cube1Position = XMFLOAT4(0.0f, 0.0f, 0.0f, 0.0f);
XMVECTOR posVec = XMLoadFloat4(&cube1Position);
tmpMat = XMMatrixTranslationFromVector(posVec);
XMStoreFloat4x4(&cube1RotMat, XMMatrixIdentity());
XMStoreFloat4x4(&cube1WorldMat, tmpMat);
Update function :
XMStoreFloat4x4(&cube1WorldMat, worldMat);
XMMATRIX viewMat = XMLoadFloat4x4(&cameraViewMat); // load view matrix
XMMATRIX projMat = XMLoadFloat4x4(&cameraProjMat); // load projection matrix
XMMATRIX wvpMat = XMLoadFloat4x4(&cube1WorldMat) * viewMat * projMat; // create wvp matrix
XMMATRIX transposed = XMMatrixTranspose(wvpMat); // must transpose wvp matrix for the gpu
XMStoreFloat4x4(&cbPerObject.wvpMat, transposed); // store transposed wvp matrix in constant buffer
memcpy(cbvGPUAddress[frameIndex], &cbPerObject, sizeof(cbPerObject));
VERTEX SHADER:
struct VS_INPUT
{
float4 pos : POSITION;
float2 tex: TEXCOORD;
float3 normal : NORMAL;
};
struct VS_OUTPUT
{
float4 pos: SV_POSITION;
float2 tex: TEXCOORD;
float3 normal: NORMAL;
};
cbuffer ConstantBuffer : register(b0)
{
float4x4 wvpMat;
};
VS_OUTPUT main(VS_INPUT input)
{
VS_OUTPUT output;
output.pos = mul(input.pos, wvpMat);
return output;
}
I know it is a lot of code to read, but I don't understand what is going wrong here. I hope somebody can help me.
A few things to try/check:
Make your background clear color grey. That way, if you are drawing black triangles you will see them.
Turn backface culling off in the rendering state, in case your triangle winding order is reversed (see the sketch after this list).
Turn depth test off in the rendering state.
Turn off alpha blending.
You don't show your pixel shader, but try writing a constant color to see if your lighting calculation is broken.
Use NVIDIA's Nsight tool, or the Visual Studio graphics debugger, to see what your graphics pipeline is doing.
Those are usually the things I try first...
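If it helps, here is a minimal sketch of those render-state tweaks in D3D12 (an illustration only; it assumes the usual d3dx12.h helpers and that your app already has a psoDesc, commandList and rtvHandle, so adapt the names to your code):
// Debugging defaults: no culling, no depth test, no blending.
psoDesc.RasterizerState = CD3DX12_RASTERIZER_DESC( D3D12_DEFAULT );
psoDesc.RasterizerState.CullMode = D3D12_CULL_MODE_NONE;   // draw triangles regardless of winding
psoDesc.DepthStencilState.DepthEnable = FALSE;             // rule out depth-test rejection
psoDesc.BlendState = CD3DX12_BLEND_DESC( D3D12_DEFAULT );  // defaults have blending disabled

// Grey clear color so black geometry is visible against the background.
const float clearColor[] = { 0.5f, 0.5f, 0.5f, 1.0f };
commandList->ClearRenderTargetView( rtvHandle, clearColor, 0, nullptr );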
I know that there are a lot of topics about memory leaks, but I tried the solutions and it still doesn't work.
I am working on this example.
So I have:
materialPano=new THREE.MeshFaceMaterial( materials );
materialPano.needsUpdate=true;
mesh = new THREE.Mesh( new THREE.CubeGeometry( 400, 400, 400, 7, 7, 7 ), materialPano );
I am adding some functionality to change the texture when I click on a button.
The problem is that the previous texture isn't deleted, and the memory in use increases with each new texture.
So when I click on the button, it executes a function which does:
materials = [loadTexture(myNewTexture1 ), loadTexture( myNewTexture2),loadTexture( myNewTexture3 ),loadTexture( myNewTexture4 ),loadTexture( myNewTexture5), loadTexture( myNewTexture6)];
myNewTextureK is the new image file, which changes with the button. Then I update the mesh material:
materialPano.materials=materials;
mesh.material=materialPano;
The problem is that I don't know how to delete the previous texture. I tried a lot of things, like this:
for(var k=0;k<materials.length;k++){
materials[k].deallocate();
scene.remove(materials[k]);
renderer.deallocateTexture(materials[k]);
renderer.deallocateMaterial(materials[k]);
renderer.deallocateObject(materials[k]);
delete materials[k];
materials[k]=null;
}
renderer.deallocateMaterial(materials);
renderer.deallocateObject(materials);
delete materials;
materials=null;
And here I do materials=[loadTexture(newTexture,...)];
And I changed the loadTexture function like this:
function loadTexture( path ) {
var texture = new THREE.Texture( texture_placeholder );
var material = new THREE.MeshBasicMaterial( { map: texture, overdraw: true } );
var image = new Image();
image.onload = function () {
texture.needsUpdate = true;
material.map.image = this;
render();
};
image.src = path;
texture.deallocate();//new line
renderer.deallocateTexture( texture );//new line
return material;
}
But it doesn't delete anything!
Also, even without any modification to the example, I noticed that the memory increases when I refresh the page, so is there a memory leak in the example itself?
Is there a way to really delete a texture and avoid the memory leak?
Thanks a lot!
Nobody has a solution? :(
I have edited the message to try to be more precise.
I have :
var mesh;
function loadTexture( path ) {
var texture = new THREE.Texture( texture_placeholder );
var material = new THREE.MeshBasicMaterial( { map: texture, overdraw: true } );
var image = new Image();
image.onload = function () {
texture.needsUpdate = true;
material.map.image = this;
render();
};
image.src = path;
texture.deallocate();
renderer.deallocateTexture( texture );
return material;
}
function init(){
//some initializations, create scene, webgl renderer, ...
var materiales = [
loadTexture( '1.jpg' ),
loadTexture( '2.jpg'),
loadTexture( '3.jpg' ),
loadTexture( '4.jpg' ),
loadTexture( '5.jpg'),
loadTexture( '6.jpg' )
];
mesh = new THREE.Mesh( new THREE.CubeGeometry( 400, 400, 400, 7, 7, 7 ),new THREE.MeshFaceMaterial( materiales ) );
scene.add( mesh );
}
When I click on a button, I do :
updateTexture(){
var materiales = [
loadTexture( 'new1.jpg' ),
loadTexture( 'new2.jpg'),
loadTexture( 'new3.jpg' ),
loadTexture( 'new4.jpg' ),
loadTexture( 'new5.jpg'),
loadTexture( 'new6.jpg' )
];
mesh.material=new THREE.MeshFaceMaterial( materiales );
}
The problem is that the memory increases with each click. With only that code it's expected, since nothing deletes the previous mesh.material. But I tried a lot of things, like:
mesh.deallocate(mesh.material);
mesh.geometry.deallocate(mesh.material);
scene.deallocate(mesh.material);
renderer.deallocateMaterial(mesh.material);
renderer.deallocateTexture(mesh.material);
renderer.deallocateObject(mesh.material);
scene.remove(mesh.material);
And the memory still increases. I should point out that I am working with Firefox 17.0.1. The leak also appears in Chrome.
Try adding texture.deallocate() if you are using r51 - r53.
Here is the modification I made to http://mrdoob.github.com/three.js/examples/webgl_panorama_equirectangular.html
I added the loadTexture function from http://mrdoob.github.com/three.js/examples/canvas_geometry_panorama.html, and a button that executes the changeTexture() function.
<!DOCTYPE html>
<head>
<title>three.js webgl - equirectangular panorama demo</title>
<meta charset="utf-8">
<meta name="viewport" content="width=device-width, user-scalable=no, minimum-scale=1.0, maximum-scale=1.0">
<style>
body {
background-color: #000000;
margin: 0px;
overflow: hidden;
}
#info {
position: absolute;
top: 0px; width: 100%;
color: #ffffff;
padding: 5px;
font-family:Monospace;
font-size:13px;
font-weight: bold;
text-align:center;
}
a {
color: #ffffff;
}
</style>
</head>
<body>
<div id="container"></div>
<div id="info">three.js webgl - equirectangular panorama demo. photo by Jón Ragnarsson.<input id="changeTexture" type="button" value="Memory leak test" onclick="changeTexture();"> </div>
<script src="./build/three.min.js"></script>
<script>
var idTexture='id1';
var camera, scene, renderer;
var fov = 70,
texture_placeholder,
isUserInteracting = false,
onMouseDownMouseX = 0, onMouseDownMouseY = 0,
lon = 0, onMouseDownLon = 0,
lat = 0, onMouseDownLat = 0,
phi = 0, theta = 0;
var materialsT, materialPano;
init();
animate();
function init() {
var container, mesh;
container = document.getElementById( 'container' );
camera = new THREE.PerspectiveCamera( fov, window.innerWidth / window.innerHeight, 1, 1100 );
camera.target = new THREE.Vector3( 0, 0, 0 );
scene = new THREE.Scene();
renderer = new THREE.WebGLRenderer();
renderer.setSize( window.innerWidth, window.innerHeight );
//put here your 6 textures, one per face. Of course, you can use the same texture for all the faces
materialsT = [
loadTexture( idTexture+'.jpg' ),
loadTexture( idTexture+'.jpg' ),
loadTexture( idTexture+'.jpg' ),
loadTexture(idTexture+'.jpg' ),
loadTexture( idTexture+'.jpg' ),
loadTexture( idTexture+'.jpg' )
];
materialPano=new THREE.MeshFaceMaterial( materialsT );
mesh = new THREE.Mesh( new THREE.CubeGeometry( 256, 256, 256, 7, 7, 7 ),materialPano );
mesh.scale.x = -1;
scene.add( mesh );
container.appendChild( renderer.domElement );
document.addEventListener( 'mousedown', onDocumentMouseDown, false );
document.addEventListener( 'mousemove', onDocumentMouseMove, false );
document.addEventListener( 'mouseup', onDocumentMouseUp, false );
document.addEventListener( 'mousewheel', onDocumentMouseWheel, false );
document.addEventListener( 'DOMMouseScroll', onDocumentMouseWheel, false);
window.addEventListener( 'resize', onWindowResize, false );
}
function loadTexture( path ) {
var texture = new THREE.Texture( texture_placeholder );
var material = new THREE.MeshBasicMaterial( { map: texture, overdraw: true } );
var image = new Image();
image.onload = function () {
texture.needsUpdate = true;
material.map.image = this;
render();
};
image.src = path;
texture.deallocate();
renderer.deallocateTexture( texture );
return material;
}
function changeTexture(){
idCourant=(idTexture=='id1')?'id2':'id1';
//you can use the same texture, there is still the memory leak
materialsT = [
loadTexture( idTexture+'.jpg' ),
loadTexture( idTexture+'.jpg'),
loadTexture( idTexture+'.jpg' ),
loadTexture( idTexture+'.jpg' ),
loadTexture( idTexture+'.jpg'),
loadTexture( idTexture+'.jpg' )
];
materialPano.materials=materialsT;//without this line there is no memory leak, but then I can't change the texture. So loadTexture itself doesn't leak, since the memory doesn't increase when I click the button with only materialsT=[loadTexture(...),...] in the changeTexture() function.
//I tried the following (placed before materialPano.materials=materialsT; of course) to deallocate materialPano, but it doesn't do anything
/*
renderer.deallocateTexture(materialPano.materials);
renderer.deallocateMaterial(materialPano.materials);
renderer.deallocateObject(materialPano.materials);
renderer.deallocateTexture(materialPano);
renderer.deallocateMaterial(materialPano);
renderer.deallocateObject(materialPano);
//*/
}
function onWindowResize() {
camera.aspect = window.innerWidth / window.innerHeight;
camera.updateProjectionMatrix();
renderer.setSize( window.innerWidth, window.innerHeight );
}
function onDocumentMouseDown( event ) {
event.preventDefault();
isUserInteracting = true;
onPointerDownPointerX = event.clientX;
onPointerDownPointerY = event.clientY;
onPointerDownLon = lon;
onPointerDownLat = lat;
}
function onDocumentMouseMove( event ) {
if ( isUserInteracting ) {
lon = ( onPointerDownPointerX - event.clientX ) * 0.1 + onPointerDownLon;
lat = ( event.clientY - onPointerDownPointerY ) * 0.1 + onPointerDownLat;
}
}
function onDocumentMouseUp( event ) {
isUserInteracting = false;
}
function onDocumentMouseWheel( event ) {
// WebKit
if ( event.wheelDeltaY ) {
fov -= event.wheelDeltaY * 0.05;
// Opera / Explorer 9
} else if ( event.wheelDelta ) {
fov -= event.wheelDelta * 0.05;
// Firefox
} else if ( event.detail ) {
fov += event.detail * 1.0;
}
camera.projectionMatrix.makePerspective( fov, window.innerWidth / window.innerHeight, 1, 1100 );
render();
}
function animate() {
requestAnimationFrame( animate );
render();
}
function render() {
lat = Math.max( - 85, Math.min( 85, lat ) );
phi = ( 90 - lat ) * Math.PI / 180;
theta = lon * Math.PI / 180;
camera.target.x = 500 * Math.sin( phi ) * Math.cos( theta );
camera.target.y = 500 * Math.cos( phi );
camera.target.z = 500 * Math.sin( phi ) * Math.sin( theta );
camera.lookAt( camera.target );
renderer.render( scene, camera );
}
</script>
</body>
I don't know how and where to use mesh.material.materials[ i ].map = texture; texture.needsUpdate = true; as you told me.
Thanks!
Sorry for the wait.
You can test the code here : http://jodyj.com/test/
There are the canvas version and the webgl version.
Just click the button to see the memory increase (in the browser's process/task manager, for example).
For the canvas version, the memory leak appears in Firefox and Chrome but not in IE.
For the webgl version (not available in IE), Firefox and Chrome both have the memory leak.
I commented the code and indicated which line creates the leak (materialPano.materials=materialsT;).
I tested with the deallocate function on the mesh and the material, but it doesn't change anything.
Thanks!
Problem solved for the webgl version.
Add:
for(var k=0;k<materialsT.length;k++){
renderer.deallocateTexture(materialsT[k].map);
renderer.deallocateTexture(materialsT[k]);
materialsT[k].deallocate();
materialsT[k].map.deallocate();
}
in the changeTexture function just before materialsT = [ loadTexture(...), ... ], and
texture.deallocate();
renderer.deallocateTexture( texture );
in the loadTexture function just before the return.
But, there is still a leak for the canvas version.
Using D3D10, I am drawing a 2d rectangle and want to fill it with a texture (bitmap) that should change a few times every second (like displaying video).
I am using a shader effect with a Texture2D variable, and I am trying to update an ID3D10EffectShaderResourceVariable and redraw the mesh.
My actual use case will copy bitmaps from memory and use UpdateSubresource.
But that did not work, so I reduced the test to switching between two DDS images.
The result is that it draws the first image as expected, but then keeps drawing it instead of switching between the two images.
I am new to D3D. Can you explain whether this method can work at all, or suggest the right way to do it?
The shader effect:
Texture2D txDiffuse;
SamplerState samLinear
{
Filter = MIN_MAG_MIP_LINEAR;
AddressU = Wrap;
AddressV = Wrap;
};
struct VS_INPUT
{
float4 Pos : POSITION;
float2 Tex : TEXCOORD;
};
struct PS_INPUT
{
float4 Pos : SV_POSITION;
float2 Tex : TEXCOORD0;
};
PS_INPUT VS( VS_INPUT input )
{
PS_INPUT output = (PS_INPUT)0;
output.Pos = input.Pos;
output.Tex = input.Tex;
return output;
}
float4 PS( PS_INPUT input) : SV_Target
{
return txDiffuse.Sample( samLinear, input.Tex );
}
technique10 Render
{
pass P0
{
SetVertexShader( CompileShader( vs_4_0, VS() ) );
SetGeometryShader( NULL );
SetPixelShader( CompileShader( ps_4_0, PS() ) );
}
}
Code (skipped many parts):
ID3D10ShaderResourceView* g_pTextureRV = NULL;
ID3D10EffectShaderResourceVariable* g_pDiffuseVariable = NULL;
D3DX10CreateEffectFromResource(gInstance, MAKEINTRESOURCE(IDR_RCDATA1), NULL, NULL, NULL, "fx_4_0", dwShaderFlags, 0, device, NULL, NULL, &g_pEffect, NULL, NULL);
g_pTechnique = g_pEffect->GetTechniqueByName( "Render" );
g_pDiffuseVariable = g_pEffect->GetVariableByName( "txDiffuse" )->AsShaderResource();
// this part is called on Frame render:
device->CreateRenderTargetView( backBuffer, NULL, &rtView);
device->ClearRenderTargetView( rtView, ClearColor );
if(g_pTextureRV != NULL) {
g_pTextureRV->Release();
g_pTextureRV = NULL;
}
D3DX10CreateShaderResourceViewFromFile(device, pCurrentDDSFilePath, NULL, NULL, &g_pTextureRV, NULL );
g_pDiffuseVariable->SetResource( g_pTextureRV );
D3D10_TECHNIQUE_DESC techDesc;
g_pTechnique->GetDesc( &techDesc );
for( UINT p = 0; p < techDesc.Passes; ++p )
{
g_pTechnique->GetPassByIndex( p )->Apply( 0 );
direct2dDrawingContext->dev->Draw( 6, 0 );
}
// ... present the current back buffer
One solution, not necessarily the best, but one that doesn't use custom shaders, follows (I wrote it in C# / Managed DirectX but it should be easy to transcode.)
Bitmap bmp; //the bitmap that you will use to update the texture
Texture tex; //the texture that DirectX will render
void Render()
{
//render some stuff
bmp = GetNextTextureFrame(); //whatever you do to update your bitmap
Surface s = tex.GetSurfaceLevel(0);
Graphics g = s.GetGraphics();
//IntPtr hdc = g.GetHdc();
//BitBlt(hdc, 0, 0, bmp.Width, bmp.Height, bmpHdc, 0, 0, 0xcc0020);
g.DrawImageUnscaled(bmp, 0, 0);
//g.ReleaseHdc(hdc); // only needed with the commented-out GetHdc/BitBlt path
s.ReleaseGraphics();
device.SetTexture(0, tex);
//now render your primitives
//render some more stuff
//present
}
The commented out lines are the way I actually did it, using an hBitmap and DC with BitBlt, because it's faster than GDI+. A lot of people will probably tell you that the above is a bad way to do it, because of all the memory locking that has to occur, and they're probably right. But I was able to achieve 30fps with multiple 1920x1080 textures, so regardless of whether it's proper, it works.
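For the D3D10 / UpdateSubresource route mentioned in the question, a minimal sketch might look like the following (untested here; it assumes a default-usage texture, the question's g_pDiffuseVariable and g_pTechnique, and illustrative names such as width, height, pBitmapBits and rowPitchInBytes):
// Create the texture once, with DEFAULT usage so UpdateSubresource can write to it.
D3D10_TEXTURE2D_DESC desc = {};
desc.Width = width;
desc.Height = height;
desc.MipLevels = 1;
desc.ArraySize = 1;
desc.Format = DXGI_FORMAT_R8G8B8A8_UNORM;
desc.SampleDesc.Count = 1;
desc.Usage = D3D10_USAGE_DEFAULT;
desc.BindFlags = D3D10_BIND_SHADER_RESOURCE;

ID3D10Texture2D* tex = NULL;
device->CreateTexture2D( &desc, NULL, &tex );

ID3D10ShaderResourceView* srv = NULL;
device->CreateShaderResourceView( tex, NULL, &srv );
g_pDiffuseVariable->SetResource( srv );   // bind the view once

// Per frame: copy the new bitmap bits into the same texture, then re-apply the pass
// so the effect state (including the bound SRV) is flushed to the pipeline.
device->UpdateSubresource( tex, 0, NULL, pBitmapBits, rowPitchInBytes, 0 );
g_pTechnique->GetPassByIndex( 0 )->Apply( 0 );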