Why is glFramebufferTexture a NULL pointer when GLEW_ARB_framebuffer_object is non-zero? - linux

My code looks roughly like this:
GLenum glewStatus = glewInit();
if (glewStatus != GLEW_OK)
exit(1);
if (!GLEW_ARB_framebuffer_object)
exit(1);
printf("%p\n", glFramebufferTexture);
This prints 0, so that explains why calling glFramebufferTexture() immediately segfaults.
However, why is it 0? A lot of the other framebuffer functions work just fine (e.g. glBindFramebuffer and glFramebufferRenderbuffer).
Do I need to initialise GLEW or the extension differently?

glFramebufferTexture() is a newer entry point than glBindFramebuffer() and the other FBO-related entry points. In the core OpenGL spec, glFramebufferTexture() was added in 3.2, while the rest of the FBO functionality was part of 3.0. glFramebufferTexture() was also not part of ARB_framebuffer_object.
You can use glFramebufferTexture2D() in most cases, which is part of the original FBO functionality. glFramebufferTexture() is only different for things like texture arrays, cube maps, etc.
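As a minimal sketch (assuming the texture and FBO are created elsewhere), you can gate the newer entry point on the reported core version and fall back to the 2D variant otherwise:
GLuint fbo, tex; /* assumed to be created and set up elsewhere */
glBindFramebuffer(GL_FRAMEBUFFER, fbo);
if (GLEW_VERSION_3_2 && glFramebufferTexture != NULL)
{
    /* Whole-texture/layered attachment, only available on 3.2+ contexts */
    glFramebufferTexture(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, tex, 0);
}
else
{
    /* Part of the original FBO functionality (3.0 / ARB_framebuffer_object) */
    glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_TEXTURE_2D, tex, 0);
}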

Related

Telling PixiJS that the WebGL state has been modified externally

I am trying to integrate PixiJS with an existing custom WebGL engine. The existing custom engine is the host and hands control to PixiJS every frame. The existing custom engine configures the WebGL state to an "almost" default state and then calls into PixiJS; after PixiJS is done, the existing custom engine does a full reset of the WebGL state.
In code:
onFrame() {
resetWebGLStateToDefault(gl);
gl.bindFramebuffer(...)
gl.viewport(...)
thenWeUsePixiJSToDoSomeAdvancedStuff();
resetWebGLStateToDefault(gl);
}
My question
In thenWeUsePixiJSToDoSomeAdvancedStuff(), how can I tell PixiJS that the WebGL state is no longer what it was the last time it ran? Pretty much everything has been reset; PixiJS should assume that everything is at its default, and I would also like to tell PixiJS what the current viewport and framebuffer are.
I tried Renderer.reset, StateSystem.reset, and StateSystem.forceState, but I guess that's not enough; PixiJS keeps assuming that some textures it bound previously are still bound (they are not, the existing custom engine unbinds everything) and I get lots of [.WebGL-0x7fd105545e00]RENDER WARNING: there is no texture bound to the unit ? messages, for pretty much all texture units (1-15) except the first one.
Edit
It's probably worth mentioning that I am calling into the renderer directly; I think I need to because the existing custom engine owns the render loop. I am basically trying something like this, but I am getting the WebGL texture errors after the first frame.
renderer.reset();
renderer.render(sprite);
renderer.reset();
Edit
I tried the same thing with an autoStart: false application, and I get the same error.
pixiApp.renderer.reset();
pixiApp.render();
pixiApp.renderer.reset();
The issue appears to be that I was calling into PixiJS with a currently bound FBO; I fixed all my problems by creating a separate PIXI.RenderTexture, rendering there, and then compositing on top of my WebGL engine using a fullscreen quad.
// Create a render texture
const renderTexture = PIXI.RenderTexture.create(640, 360);
// Render with PixiJS
renderer.reset();
renderer.render(this.stage, renderTexture);
renderer.reset();
// Retrieve the raw WebGL texture
const texture = renderTexture.baseTexture._glTextures[renderer.texture.CONTEXT_UID].texture;
// Now composite on top of the other engine
gl.bindFramebuffer(gl.FRAMEBUFFER, theFramebufferWhereINeededPixiJSToRenderInTheFirstPlace);
gl.useProgram(quadProgram);
gl.activeTexture(gl.TEXTURE0);
gl.bindTexture(gl.TEXTURE_2D, texture);
gl.uniform1i(u_Texture, 0);
gl.bindBuffer(gl.ARRAY_BUFFER, quadBuffer);
gl.vertexAttribPointer(0, 2, gl.BYTE, false, 2, 0);
gl.enableVertexAttribArray(0);
gl.bindBuffer(gl.ARRAY_BUFFER, null);
gl.drawArrays(gl.TRIANGLE_STRIP, 0, 4);
gl.useProgram(null);
You may need to resize() the renderer and/or the render texture, depending on your actual setup.

Can't release the ID3DX11Effect pointer?

I use the Effects framework in my demo.
Here are the declarations in my header file:
ID3D11Buffer* m_pVertexBuffer;
ID3D11Buffer* m_pIndexBuffer;
ID3D11InputLayout* m_pInputLayout;
ID3DX11Effect* m_pFx;
ID3DX11EffectTechnique* m_pTechnique;
ID3DX11EffectMatrixVariable* m_pFxWorldViewProj;
I release them as follows:
void HillsDemo::UnLoadContent()
{
if (m_pVertexBuffer) m_pVertexBuffer->Release();
if (m_pIndexBuffer) m_pIndexBuffer->Release();
if (m_pInputLayout) m_pInputLayout->Release();
if (m_pTechnique) m_pTechnique->Release();
if (m_pFx) m_pFx->Release();
}
Then I run the demo. When I close the demo window, I get an error saying that HillsDemo has stopped working. Why? If I delete the line if (m_pFx) m_pFx->Release();, there is no error. So does releasing m_pFx cause the error?
I looked at the Effects 11 documentation (https://github.com/Microsoft/FX11/wiki/Effects-11), which says:
The following types are now derived from IUnknown: ID3DX11EffectType, ID3DX11EffectVariable, ID3DX11EffectPass, ID3DX11EffectTechnique, ID3DX11EffectGroup. Note that these objects do not follow standard COM reference-counting rules and they are released when the parent ID3DX11Effect is released. This is mostly to simplify use of Effects 11 from managed languages.
Does this mean that I should only release m_pFx rather than both m_pFx and m_pTechnique? Any ideas?
I assume you are using the latest version of Effects 11 from GitHub rather than the legacy DirectX SDK copy based on your reference to the project wiki.
You should really look at using ComPtr instead of raw pointers to COM objects. In the case of Effects 11, you'd just use it for ID3DX11Effect instances. You should stick with raw pointers for ID3DX11EffectType, ID3DX11EffectVariable, ID3DX11EffectPass, ID3DX11EffectTechnique, and ID3DX11EffectGroup. Don't bother calling Release on these raw pointers, as their lifetime is controlled by the ID3DX11Effect.
Something like:
Microsoft::WRL::ComPtr<ID3D11Buffer> m_pVertexBuffer;
Microsoft::WRL::ComPtr<ID3D11Buffer> m_pIndexBuffer;
Microsoft::WRL::ComPtr<ID3D11InputLayout> m_pInputLayout;
Microsoft::WRL::ComPtr<ID3DX11Effect> m_pFx;
ID3DX11EffectTechnique* m_pTechnique;
ID3DX11EffectMatrixVariable* m_pFxWorldViewProj;
void HillsDemo::UnLoadContent()
{
m_pVertexBuffer.Reset();
m_pIndexBuffer.Reset();
m_pInputLayout.Reset();
m_pFx.Reset();
m_pTechnique = nullptr;
m_pFxWorldViewProj = nullptr;
}
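For completeness, here is a rough sketch of how those members are typically filled in, assuming the effect has already been compiled into a blob (fxBlob here) and you have an ID3D11Device* (device); the technique and variable names are placeholders for whatever your .fx file declares. Only the ID3DX11Effect owns a reference; the technique and variable pointers are just views into it, which is why they can simply be nulled out in UnLoadContent:
HRESULT hr = D3DX11CreateEffectFromMemory(
    fxBlob->GetBufferPointer(), fxBlob->GetBufferSize(),
    0, device, m_pFx.GetAddressOf());
if (SUCCEEDED(hr))
{
    // Borrowed pointers, owned by m_pFx; do not Release() them.
    m_pTechnique       = m_pFx->GetTechniqueByName("ColorTech");
    m_pFxWorldViewProj = m_pFx->GetVariableByName("gWorldViewProj")->AsMatrix();
}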

OpenCL clCreateFromGLTexture using a different texture target

The aim of my project is to get a live camera feed from an Android device, use OpenCL to perform real-time filtering on those images, and render the output on the display.
I aim to do this in real time, which is why I am using OpenCL-OpenGL interop.
I have successfully managed to create a shared context using EGLContext and EGLDisplay. Now I am trying to use clCreateFromGLTexture so I can access these images in an OpenCL kernel. The problem, however, is that Android requires the texture to be bound with the target GL_TEXTURE_EXTERNAL_OES, as described here (http://developer.android.com/reference/android/graphics/SurfaceTexture.html), but this texture target is not a valid target for clCreateFromGLTexture (https://www.khronos.org/registry/cl/sdk/1.1/docs/man/xhtml/clCreateFromGLTexture2D.html).
So I am not sure how to go about this.
This is how I create a GL Texture in android:
GLES20.glGenTextures(1, texture_id, 0);
GLES20.glBindTexture(texture_target, texture_id[0]);
and this is how I am trying to create a cl memory object:
glTexImage2D(texture_target, 0, GL_RGBA, 640, 480, 0, GL_RGBA, GL_UNSIGNED_BYTE, NULL);
cl_mem camera_image = clCreateFromGLTexture(m_context, CL_MEM_READ_WRITE, texture_target, 0, texture_id, &err);
The error I get when I try to create cl memory object from GL texture is CL_INVALID_VALUE.
I am pretty new to OpenGL, so there could be something basic I might have overlooked.
The texture you receive from the camera is not the usual texture you'd expect; you even have to enable an extension in the shader to read from it.
You need to perform an additional copy from the GL_TEXTURE_EXTERNAL_OES target to another texture that is created in the usual way. With luck you can bind both of them to FBOs and then just issue a blit. If that doesn't work, you can always use the normal texture as a render target and simply draw a quad textured with the camera image.
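A rough sketch of the draw-a-quad approach, in GLES 2 style C code; copyProgram, drawFullScreenQuad() and the 640x480 size are placeholders for your own setup, and copyProgram's fragment shader must declare its sampler as samplerExternalOES with the GL_OES_EGL_image_external extension enabled:
GLuint destTex, fbo;

// Create a regular 2D texture that OpenCL can share.
glGenTextures(1, &destTex);
glBindTexture(GL_TEXTURE_2D, destTex);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, 640, 480, 0, GL_RGBA, GL_UNSIGNED_BYTE, NULL);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);

// Attach it to an FBO so we can render into it.
glGenFramebuffers(1, &fbo);
glBindFramebuffer(GL_FRAMEBUFFER, fbo);
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_TEXTURE_2D, destTex, 0);

// Draw a fullscreen quad that samples the camera's external texture.
glViewport(0, 0, 640, 480);
glUseProgram(copyProgram);
glActiveTexture(GL_TEXTURE0);
glBindTexture(GL_TEXTURE_EXTERNAL_OES, texture_id);
drawFullScreenQuad();
glBindFramebuffer(GL_FRAMEBUFFER, 0);

// destTex is now an ordinary GL_TEXTURE_2D and can be shared with OpenCL.
cl_mem camera_image = clCreateFromGLTexture(m_context, CL_MEM_READ_ONLY,
                                            GL_TEXTURE_2D, 0, destTex, &err);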

Problems using MemoryMappedFile class on Mono

I'm trying to port a new version of the Isis2 library from .NET on Windows to Mono/Linux. This new code uses MemoryMappedFile objects, and I suddenly am running into issues with the Mono.Posix.Helper library. I believe that my issues would vanish if I could successfully compile and run the following test program:
using System;
using System.Collections.Generic;
using System.Linq;
using System.Text;
using System.IO.MemoryMappedFiles;
namespace foobar
{
class Program
{
static int CAPACITY = 100000;
static void Main(string[] args)
{
MemoryMappedFile mmf = MemoryMappedFile.CreateNew("test", CAPACITY);
MemoryMappedViewAccessor mva = mmf.CreateViewAccessor();
for (int n = 0; n < CAPACITY; n++)
{
byte b = (byte)(n & 0xFF);
mva.Write<byte>(n, ref b);
}
}
}
}
... at present, when I try to compile this on Mono I get a bewildering set of linker errors: it seems unable to find libMonoPosixHelper.so, although my LD_LIBRARY_PATH includes the directory containing that file. If I manage to get past that stage, I get "System.NotImplementedException: The requested feature is not implemented." at runtime. Yet I've looked at the Mono implementation of the CreateNew method; it seems fully implemented, and the same is true for the CreateViewAccessor method. So I have a sense that something is going badly wrong when linking to the Mono libraries.
Does anyone have experience with MemoryMappedFile objects under Mono? I see quite a few questions about this kind of issue here and on other sites, but all seem to be old threads...
OK, I figured at least part of this out by inspection of the Mono code implementing this API. In fact they implemented CreateNew in a way that departs pretty drastically from the .NET API, causing these methods to behave very differently from what you would expect.
For CreateNew, they actually require that the file name you specify be the name of an existing Linux file of size at least as large as the capacity you specify, and also do some other checks for access permissions (of course), exclusive access (which is at odds with sharing...) and to make sure the capacity you requested is > 0. So if you had the file previously open, or someone else does, this will fail -- in contrast to .NET, where you explicitly use memory-mapped files for sharing.
In contrast, CreateOrOpen appears to be "more or less" correctly implemented; switching to this version seems to solve the problem. To get the effect of CreateNew, do a Delete first, wrapping it in a try/catch to catch IOException if the file doesn't exist. Then use File.WriteAllBytes to create a file with your desired content. Then call CreateOrOpen. Now this sounds dumb, but it works. Obviously you can't guarantee atomicity this way (three operations rather than one), but at least you get the desired functionality.
I can live with these restrictions as it works out, but they may surprise others, and are totally different from the .NET API definition for MemoryMappedFile.
As for my linking issues, as far as I can tell there is a situation in which Mono doesn't use the LD_LIBRARY_PATH you specify correctly and hence can't find the .so file or .dll file you used. I'll post more on this if I can precisely pin down the circumstances -- on this one, I've worked around the issue by statically linking to the library.

Is there a Direct 3D equivalent of glxinfo?

I need to make sure my machine can create a D3D window before even trying to open it. How can I do so?
I've found the equivalent of glxinfo for DirectX -- it's called dxdiag and is provided by Microsoft. It lets you output an XML file with a D3dStatus field (which says "not available" in my case).
You probably want to take a look at DeviceCaps. It should be able to tell you the capabilities of the device so that you don't try to create a window that it doesn't support.
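For example, on the Direct3D 9 path a minimal sketch (no window or device creation needed) is to ask the runtime for the HAL device caps and treat any failure as "D3D not available":
#include <d3d9.h>

bool canCreateD3D9Device()
{
    IDirect3D9* d3d = Direct3DCreate9(D3D_SDK_VERSION);
    if (!d3d)
        return false;               // D3D9 runtime not present at all
    D3DCAPS9 caps = {};
    HRESULT hr = d3d->GetDeviceCaps(D3DADAPTER_DEFAULT, D3DDEVTYPE_HAL, &caps);
    d3d->Release();
    return SUCCEEDED(hr);           // caps describes what the HAL device supports
}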
Actually, glxinfo does create an OpenGL window and an OpenGL context, but never maps it to the screen. One must create an OpenGL context to get all the information, just like glxinfo does.
If you use Direct3D 11 you can use this code:
// Determines feature level without creating a device.
D3D_FEATURE_LEVEL determineHighestSupportedFeatureLevel()
{
HRESULT hr = E_FAIL;
D3D_FEATURE_LEVEL FeatureLevel;
hr = D3D11CreateDevice(
nullptr,
D3D_DRIVER_TYPE_HARDWARE,
nullptr,
0,
nullptr,
0,
D3D11_SDK_VERSION,
nullptr,
&FeatureLevel,
nullptr );
if(FAILED(hr))
{
throw std::runtime_error("Determining the highest supported Direct3D11 feature level failed.");
}
return FeatureLevel;
}