How to return a QSGTexture pointer from a QOpenGLTexture in Qt 6

I am rendering YUV video frames using a Qt6 QML interface. In order to implement the material shader object, I have to implement updateSampledImage(), which should return a pointer to a QSGTexture. However, I only know how to obtain a QOpenGLTexture; how do I convert this object to a QSGTexture?
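In Qt 6, the scene graph can adopt an existing OpenGL texture id through QNativeInterface::QSGOpenGLTexture::fromNative(). A minimal sketch, assuming you already have a created and uploaded QOpenGLTexture plus access to the QQuickWindow (the wrapAsSceneGraphTexture name is just for illustration):

#include <QOpenGLTexture>
#include <QQuickWindow>
#include <QSGTexture>

// Sketch: wrap an existing QOpenGLTexture in a QSGTexture so it can be
// returned from QSGMaterialShader::updateSampledImage(). The wrapper does
// not take ownership of the GL texture, so the QOpenGLTexture must stay
// alive for as long as the QSGTexture is in use.
QSGTexture *wrapAsSceneGraphTexture(QOpenGLTexture *glTexture, QQuickWindow *window)
{
    return QNativeInterface::QSGOpenGLTexture::fromNative(
        glTexture->textureId(),                          // raw GL texture id
        window,                                          // the QQuickWindow doing the rendering
        QSize(glTexture->width(), glTexture->height()),
        {});                                             // QQuickWindow::CreateTextureOptions
}

This has to run with the scene graph's OpenGL context current (i.e. on the render thread, for example from updatePaintNode() or the material update path), and for a YUV frame you would typically create one such wrapper per plane and bind each one at its own shader binding in updateSampledImage().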

Related

What is the purpose of calling the gl.enableVertexAttribArray() function in WebGL?

In WebGL, before we call the drawArrays() function, we call enableVertexAttribArray(), passing it the location of an attribute in the compiled GLSL program that stores the vertex positions of the object we are going to draw. Can someone explain what it does and why we have to call that function?
The attributes in a vertex shader are disabled by default. To use one, you have to first enable it with this function.
You still need to bind a buffer to the enabled attribute; the specification says this:
If a vertex attribute is enabled as an array via enableVertexAttribArray but no buffer is bound to that attribute via bindBuffer and vertexAttribPointer, then calls to drawArrays or drawElements will generate an INVALID_OPERATION error.
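For illustration, here is the same enable-then-bind sequence written with the equivalent desktop OpenGL calls in C++ (the WebGL functions map one-to-one onto these). program is assumed to be an already linked shader program with an a_position attribute, and a compatibility context or an already bound VAO is assumed:

#include <vector>
// GL prototypes come from whatever loader you already use, e.g. glad or GLEW.

// Sketch: an attribute is unusable until it is both enabled and pointed at a
// bound buffer; leaving out either step is what leads to INVALID_OPERATION.
void drawPositions(GLuint program, const std::vector<float> &positions)
{
    GLint positionLoc = glGetAttribLocation(program, "a_position");

    GLuint vbo = 0;
    glGenBuffers(1, &vbo);
    glBindBuffer(GL_ARRAY_BUFFER, vbo);
    glBufferData(GL_ARRAY_BUFFER, positions.size() * sizeof(float),
                 positions.data(), GL_STATIC_DRAW);

    glUseProgram(program);

    // 1. Enable the attribute (all attributes start out disabled).
    glEnableVertexAttribArray(positionLoc);
    // 2. Point the enabled attribute at the currently bound ARRAY_BUFFER.
    glVertexAttribPointer(positionLoc, 2, GL_FLOAT, GL_FALSE, 0, nullptr);

    glDrawArrays(GL_TRIANGLES, 0, static_cast<GLsizei>(positions.size() / 2));

    glDeleteBuffers(1, &vbo);
}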

Telling PixiJS that the WebGL state has been modified externally

I am trying to integrate PixiJS with an existing custom WebGL engine. The existing custom engine is the host and hands control to PixiJS every frame. The existing custom engine configures the WebGL state to an "almost" default state and then calls into PixiJS; after PixiJS is done, the existing custom engine does a full reset of the WebGL state.
In code:
onFrame() {
resetWebGLStateToDefault(gl);
gl.bindFramebuffer(...)
gl.viewport(...)
thenWeUsePixiJSToDoSomeAdvancedStuff();
resetWebGLStateToDefault(gl);
}
My question
In thenWeUsePixiJSToDoSomeAdvancedStuff(), how can I tell PixiJS that the state is not what it used to be the previous time that it ran? Pretty much everything has been reset; PixiJS should assume that everything is default and I would also like to tell PixiJS what the current viewport and framebuffer are.
I tried Renderer.reset, StateSystem.reset, and StateSystem.forceState, but I guess that's not enough; PixiJS keeps assuming that some textures it bound previously are still bound (they are not; the existing custom engine unbinds everything), and I get lots of [.WebGL-0x7fd105545e00]RENDER WARNING: there is no texture bound to the unit ? warnings, for pretty much all texture units 1-15 except the first one.
Edit
It's probably worth mentioning that I am calling into the renderer directly; I think I need to because the existing custom engine owns the render loop. I am basically trying something like this, but I am getting the WebGL texture errors after the first frame.
renderer.reset();
renderer.render(sprite);
renderer.reset();
Edit
I tried the same thing with an autoStart: false application, and I get the same error.
pixiApp.renderer.reset();
pixiApp.render();
pixiApp.renderer.reset();
The issue appears to be that I was calling into PixiJS with a currently bound FBO; I fixed all my problems by creating a separate PIXI.RenderTexture, rendering there, and then compositing on top of my WebGL engine using a fullscreen quad.
// Create a render texture
const renderTexture = PIXI.RenderTexture.create(640, 360);
// Render with PixiJS
renderer.reset();
renderer.render(this.stage, renderTexture);
renderer.reset();
// Retrieve the raw WebGL texture
const texture = renderTexture.baseTexture._glTextures[renderer.texture.CONTEXT_UID].texture;
// Now composite on top of the other engine
gl.bindFramebuffer(gl.FRAMEBUFFER, theFramebufferWhereINeededPixiJSToRenderInTheFirstPlace);
gl.useProgram(quadProgram);
gl.activeTexture(gl.TEXTURE0);
gl.bindTexture(gl.TEXTURE_2D, texture);
gl.uniform1i(u_Texture, 0);
gl.bindBuffer(gl.ARRAY_BUFFER, quadBuffer);
gl.vertexAttribPointer(0, 2, gl.BYTE, false, 2, 0);
gl.enableVertexAttribArray(0);
gl.bindBuffer(gl.ARRAY_BUFFER, null);
gl.drawArrays(gl.TRIANGLE_STRIP, 0, 4);
gl.useProgram(null);
You may need to resize() the renderer and/or the render texture, depending on your actual setup.

Serializing works but unserializing crashes

I'm trying to set up a save function in my HaxeFlixel game.
Some background: The object in question is an instance of Player, which extends FlxSprite. Save data is stored in an instance of a custom class I made for it. That instance is stored in a StringMap (the keys are save names), which is saved by serializing it to a variable in a FlxSave.
Creating the save data and writing it works fine. However, reading the save data back in crashes the game with the message "Invalid field: pixels". pixels is a field from FlxSprite, but it's not the first such field in the serialized string, so it's probably not that.
If it's useful, the declaration of that field is y6:pixelsn - that is:
y begins a field, which is named...
6: a string of length 6...
pixels (the string)
n null
From that line of code in the FlxSprite source you can see that pixels is actually not a variable* at runtime, so the unserializer crashes when it tries to assign a value to pixels. But more investigation is needed into why the serializer wrote the pixels field in the first place, because it shouldn't really exist at runtime.
Note*: the accessors of pixels are (get, set), which means pixels is not a real field at runtime.
As a general rule, I don't recommend serializing a FlxSprite (or other complex objects) directly. Rather, you should extract the desired information (e.g. x/y position or hp, etc) and serialize only those.

OpenCL clCreateFromGLTexture using a different texture target

The aim of my project is to get a live camera feed from an Android device, use OpenCL to perform real-time filtering on those images, and render the output on the display.
I aim to do this in real time, which is why I am using OpenCL-OpenGL interop.
I have successfully managed to create a shared context using EGLContext and EGLDisplay. Now I am trying to use clCreateFromGLTexture so I can access these images in an OpenCL kernel. The problem, however, is that Android requires the texture target to be GL_TEXTURE_EXTERNAL_OES when binding the texture, as described here: (http://developer.android.com/reference/android/graphics/SurfaceTexture.html). However, this is not a valid texture target when using clCreateFromGLTexture (https://www.khronos.org/registry/cl/sdk/1.1/docs/man/xhtml/clCreateFromGLTexture2D.html).
So I am not sure how to go about this.
This is how I create a GL Texture in android:
GLES20.glGenTextures(1, texture_id, 0);
GLES20.glBindTexture(texture_target, texture_id[0]);
and this is how I am trying to create a cl memory object:
glTexImage2D(texture_target, 0, GL_RGBA, 640, 480, 0, GL_RGBA, GL_UNSIGNED_BYTE, NULL);
cl_mem camera_image = clCreateFromGLTexture(m_context, CL_MEM_READ_WRITE, texture_target, 0, texture_id, &err);
The error I get when I try to create cl memory object from GL texture is CL_INVALID_VALUE.
I am pretty new to OpenGL, so there could be something basic I might have overlooked.
The texture you receive from the camera is not the usual texture you'd expect; you even have to specify an extension in the shader to read from it.
You need to perform an additional copy from the GL_TEXTURE_EXTERNAL_OES target into another texture created in the usual way. With luck you can bind both of them to FBOs and simply issue a blit. If that doesn't work, you can always use the normal texture as a render target and draw a quad textured with the camera image.
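A sketch of that fallback path (drawing the external image into a regular GL_TEXTURE_2D through an FBO), assuming an OpenGL ES 2.0 context and that program has already been compiled and linked from the two shader sources shown; the regular texture is then the one you hand to clCreateFromGLTexture with a GL_TEXTURE_2D target:

#include <GLES2/gl2.h>
#include <GLES2/gl2ext.h>   // defines GL_TEXTURE_EXTERNAL_OES

// Vertex shader: pass-through fullscreen quad.
static const char *kVertSrc =
    "attribute vec2 a_pos;\n"
    "attribute vec2 a_uv;\n"
    "varying vec2 v_uv;\n"
    "void main() { v_uv = a_uv; gl_Position = vec4(a_pos, 0.0, 1.0); }\n";

// Fragment shader: sampling the camera image needs the OES extension
// and the samplerExternalOES sampler type.
static const char *kFragSrc =
    "#extension GL_OES_EGL_image_external : require\n"
    "precision mediump float;\n"
    "varying vec2 v_uv;\n"
    "uniform samplerExternalOES u_camera;\n"
    "void main() { gl_FragColor = texture2D(u_camera, v_uv); }\n";

// Copies the SurfaceTexture's GL_TEXTURE_EXTERNAL_OES image into a plain
// GL_TEXTURE_2D (regularTex), which clCreateFromGLTexture does accept.
void copyCameraFrame(GLuint program, GLuint externalTex, GLuint regularTex,
                     GLuint fbo, int width, int height)
{
    // Render target: the regular texture attached to an FBO.
    glBindFramebuffer(GL_FRAMEBUFFER, fbo);
    glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0,
                           GL_TEXTURE_2D, regularTex, 0);
    glViewport(0, 0, width, height);

    glUseProgram(program);
    glActiveTexture(GL_TEXTURE0);
    glBindTexture(GL_TEXTURE_EXTERNAL_OES, externalTex);
    glUniform1i(glGetUniformLocation(program, "u_camera"), 0);

    // Fullscreen quad as client-side arrays: x, y, u, v per vertex.
    static const GLfloat quad[] = {
        -1.f, -1.f, 0.f, 0.f,
         1.f, -1.f, 1.f, 0.f,
        -1.f,  1.f, 0.f, 1.f,
         1.f,  1.f, 1.f, 1.f,
    };
    GLint posLoc = glGetAttribLocation(program, "a_pos");
    GLint uvLoc  = glGetAttribLocation(program, "a_uv");
    glBindBuffer(GL_ARRAY_BUFFER, 0);
    glEnableVertexAttribArray(posLoc);
    glEnableVertexAttribArray(uvLoc);
    glVertexAttribPointer(posLoc, 2, GL_FLOAT, GL_FALSE, 4 * sizeof(GLfloat), quad);
    glVertexAttribPointer(uvLoc,  2, GL_FLOAT, GL_FALSE, 4 * sizeof(GLfloat), quad + 2);
    glDrawArrays(GL_TRIANGLE_STRIP, 0, 4);

    glBindFramebuffer(GL_FRAMEBUFFER, 0);
}

In a real pipeline you would also apply the matrix from SurfaceTexture.getTransformMatrix() to the texture coordinates and acquire/release the shared texture on the CL side around kernel launches; both are omitted here to keep the sketch short.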

NSOpenGLLayer and multithreading

I have written a 3D viewer using Cocoa. The OpenGL renderings are performed in a separate thread that creates its own NSOpenGLContext.
Without layer handling, the 3D view's drawRect method is called on refresh, the OpenGL thread does its refresh, and everything works perfectly...
Now, I have to implement the application using Cocoa layers. When the 3D NSView is created, a subclass of NSOpenGLLayer is created and attached to the view. The method
- (void)drawInOpenGLContext:(NSOpenGLContext *)ctx
                pixelFormat:(NSOpenGLPixelFormat *)pixelFormat
               forLayerTime:(CFTimeInterval)timeInterval
                displayTime:(const CVTimeStamp *)timeStamp;
is called by Cocoa, but I am unable to make my OpenGL thread render anything.
I have tried to use the OpenGL context passed to drawInOpenGLContext in the OpenGL thread, and I have tried to do a
[layer setOpenGLContext:ctx]
in the OpenGL thread with the OpenGL context created in the thread, and so on, but nothing works.
Did you remember to call setWantsLayer on your NSView? You probably need to call that on your NSView to make it a layer-hosted view. See the documentation for NSView setWantsLayer.
_testOglView = [[NSView alloc]initWithFrame:[self.view bounds]];
[_testOglView setLayer:[[TestOpenGLLayer alloc] init]];
[_testOglView setWantsLayer:YES];
[self.view addSubview:_testOglView];
In my TestOpenGLLayer class, I only had to define the drawInOpenGLContext method. I added my OpenGL commands to it and the view rendered the layer properly. I did not have to call [layer setOpenGLContext:].
