Telling PixiJS that the WebGL state has been modified externally - pixi.js

I am trying to integrate PixiJS with an existing custom WebGL engine. The custom engine is the host and hands control to PixiJS every frame: it configures the WebGL state to an "almost" default state and then calls into PixiJS; after PixiJS is done, the engine does a full reset of the WebGL state.
In code:
onFrame() {
    resetWebGLStateToDefault(gl);
    gl.bindFramebuffer(...);
    gl.viewport(...);
    thenWeUsePixiJSToDoSomeAdvancedStuff();
    resetWebGLStateToDefault(gl);
}
My question
In thenWeUsePixiJSToDoSomeAdvancedStuff(), how can I tell PixiJS that the state is not what it was the previous time it ran? Pretty much everything has been reset; PixiJS should assume that everything is at its default, and I would also like to tell PixiJS what the current viewport and framebuffer are.
I tried Renderer.reset, StateSystem.reset, and StateSystem.forceState, but I guess that's not enough; PixiJS keeps assuming that some textures it set previously are still bound (they are not, the custom engine unbinds everything), and I get lots of [.WebGL-0x7fd105545e00]RENDER WARNING: there is no texture bound to the unit ? warnings, pretty much for all texture units (1-15) except the first one.
Edit
It's probably worth mentioning that I am calling into the renderer directly; I think I need to because the existing custom engine owns the render loop. I am basically trying something like this, but I am getting the WebGL texture errors after the first frame.
renderer.reset();
renderer.render(sprite);
renderer.reset();
Edit
I tried the same thing with an autoStart: false application, and I get the same error.
pixiApp.renderer.reset();
pixiApp.render();
pixiApp.renderer.reset();

The issue appears to be that I was calling into PixiJS with a currently bound FBO; I fixed all my problems by creating a separate PIXI.RenderTexture, rendering there, and then compositing on top of my WebGL engine using a fullscreen quad.
// Create a render texture
const renderTexture = PIXI.RenderTexture.create(640, 360);
// Render with PixiJS
renderer.reset();
renderer.render(this.stage, renderTexture);
renderer.reset();
// Retrieve the raw WebGL texture
const texture = renderTexture.baseTexture._glTextures[renderer.texture.CONTEXT_UID].texture;
// Now composite on top of the other engine
gl.bindFramebuffer(gl.FRAMEBUFFER, theFramebufferWhereINeededPixiJSToRenderInTheFirstPlace);
gl.useProgram(quadProgram);
gl.activeTexture(gl.TEXTURE0);
gl.bindTexture(gl.TEXTURE_2D, texture);
gl.uniform1i(u_Texture, 0);
gl.bindBuffer(gl.ARRAY_BUFFER, quadBuffer);
gl.vertexAttribPointer(0, 2, gl.BYTE, false, 2, 0);
gl.enableVertexAttribArray(0);
gl.bindBuffer(gl.ARRAY_BUFFER, null);
gl.drawArrays(gl.TRIANGLE_STRIP, 0, 4);
gl.useProgram(null);
You may need to resize() the renderer and/or the render texture, depending on your actual setup.
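For example, a minimal sketch of that, assuming a PixiJS v5-style resize API and that hostWidth/hostHeight are whatever your engine reports for the current output size:
// Keep the PixiJS renderer and the render texture in sync with the host's output size
renderer.resize(hostWidth, hostHeight);
renderTexture.resize(hostWidth, hostHeight);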

Related

How to change the texture of AnimatedSprite programmatically

I have created a base scene that I intend to use for all human characters of my game. I am using an AnimatedSprite where I defined different animations for the different positions of the character, all using a texture that contains all the frames.
This works for a specific character, but now I would like to create other characters. Since I am using a character generator, all the sprite sheets are basically the same, but with different clothes, accessories, etc. I would like to avoid replicating the animation definitions for the other characters. I could achieve that by setting a different texture on each instance of the scene, but I can't find a way to do it.
If I edit the tscn file and set a different image, it does what I want.
I tried updating the atlas property of the animation frames, but doing that affects all instances of the scene:
func update_texture(value: Texture):
    for animation in $AnimatedSprite.frames.animations:
        for frame in animation.frames:
            frame.atlas = value
I also tried cloning a SpriteFrames instance, by calling duplicate(0), updating it with the above code, then setting $AnimatedSprite.frames, but this also updates all instances of the scene.
What is the proper way to change the texture of a specific instance of AnimatedSprite?
I found a solution. The problem was that the duplicate method does not perform a deep clone, so I was having references to the same frame instances.
Here's my updated version:
func update_texture(texture: Texture):
    var reference_frames: SpriteFrames = $AnimatedSprite.frames
    var updated_frames = SpriteFrames.new()
    for animation in reference_frames.get_animation_names():
        if animation != "default":
            updated_frames.add_animation(animation)
            updated_frames.set_animation_speed(animation, reference_frames.get_animation_speed(animation))
            updated_frames.set_animation_loop(animation, reference_frames.get_animation_loop(animation))
            for i in reference_frames.get_frame_count(animation):
                var updated_texture: AtlasTexture = reference_frames.get_frame(animation, i).duplicate()
                updated_texture.atlas = texture
                updated_frames.add_frame(animation, updated_texture)
    updated_frames.remove_animation("default")
    $AnimatedSprite.frames = updated_frames

How can I simulate hand rays on HoloLens 1?

I'm setting up a new project which is intended to deploy to both HoloLens 1 and 2, and I'd like to use hand rays in both, or at least be able to simulate them on HoloLens 1 in preparation for HoloLens 2.
This is as far as I have got:
Customizing the InputSimulationService to be gesture only (so I can test in editor)
Adding the GGVHand Controller Type to DefaultControllerPointer Options in the MRTK/Pointers section.
This gets it to show up and respond to clicks both in the editor and on the device, but it does not use the hand coordinates and instead raycasts forward from 0,0,0, which suggests that the GGV Hand Controller is providing a GripPosition (of course with no rotation, due to HL1) but not a Pointer Pose.
I imagine the cleanest way to do this would be to add a pointer pose to the GGV Hand controller, or add (estimated) rotation to the GripPosition and use this as the Pose Action in the ShellHandRayPointer. I can't immediately see where to customize/insert this in the MRTK.
Alternatively, I could customize the DefaultControllerPointer prefab but I am hesitant to do so as the MRTK seems to still be undergoing frequent changes and this would likely lead to upgrade headaches.
You could create a custom pointer that would set the pointer's rotation to be inferred based on the hand position, and then like you suggested use Grip Pose instead of Pointer Pose for the Pose Action.
The code of your custom pointer would look something like this:
// Note you could extend ShellHandRayPointer if you wanted the beam bending;
// however, configuring that pointer requires careful setup of the asset.
public class HL1HandRay : LinePointer
{
    public override Quaternion Rotation
    {
        get
        {
            // Set rotation to be the line from the head to the hand, rotated a bit by handedness
            float sign = Controller.ControllerHandedness == Handedness.Right ? -1f : 1f;
            return Quaternion.Euler(0, sign * 35, 0) * Quaternion.LookRotation(Position - CameraCache.Main.transform.position, Vector3.up);
        }
    }

    // We cannot use the base IsInteractionEnabled
    // because HL1 hands always have their "IsInPointingPose" field set to false.
    // You may want to do more thorough checks here, following the BaseControllerPointer implementation.
    public override bool IsInteractionEnabled => IsFocusLocked || IsTracked;
}
Then create a new pointer prefab and configure your pointer profile to use it. Creating your own prefab instead of modifying the MRTK prefabs has the advantage of ensuring that MRTK updates will not overwrite your prefabs.
Here are some captures of the simple pointer prefab I made to test this, with the relevant changes highlighted, along with the components I used (screenshots not included here).

Loading a .bmp file and rendering it as the background in DirectX 11

I have spent the afternoon looking over the documentation on the contexts / surfaces and followed quite a few guides but I just do not understand how this is done.
All I want is to use a bitmap (already loaded) and to put it into my scene as the background.
I heard that I have to use a surface and draw it first but I have absolutely no idea how to obtain the surface or how to assign the bitmap to it.
Any help is appreciated.
Yes, one method is to use a surface; however, I would recommend the following approach instead (note that this sample uses the Direct3D 9 / D3DX API rather than Direct3D 11, but the idea of drawing a textured quad as the background is the same).
I am not sure how you have loaded the bitmap; in any case, you can use a bitmap as the background this way:
// Make a texture object
LPDIRECT3DTEXTURE9 m_myBitmapTexture;
// During initialization, load the texture from file
if (FAILED(D3DXCreateTextureFromFileEx(device, "filepath\\file.bmp", 0, 0, 0, 0,
        D3DFMT_UNKNOWN, D3DPOOL_DEFAULT, D3DX_DEFAULT, D3DX_DEFAULT,
        0x00000000, NULL, NULL, &m_myBitmapTexture)))
    return E_FAIL;
// During rendering, set the texture and draw
device->SetTexture(0, m_myBitmapTexture);
device->SetStreamSource(0, yourBuffer, 0, sizeof(YourBufferStruct));
device->SetFVF(yourTextureFVF); // Set the flexible vertex format
device->DrawPrimitive(topologyType, startVertex, primitiveCount);
You just need to make sure your vertex buffer contains texture coordinates, and that your shader uses them too:
struct YourBufferStruct
{
    D3DXVECTOR3 position;
    D3DXVECTOR2 textureCoord;
};
// Define your flexible vertex format; I am just adding position and texture here,
// but you can add color, normal, or whatever else you want
#define yourTextureFVF (D3DFVF_XYZ | D3DFVF_TEX1)
Now add the texture coordinates to your shader too.
For more details, see https://msdn.microsoft.com/en-us/library/windows/desktop/bb153262(v=vs.85).aspx

Dimensions of ImageMarker

I am new to the Vuforia SDK. I have an image which acts as a target, and I want to place this image onto the image marker. At runtime the size of the image marker varies. Is there any method by which I can get the width and height of the image marker, so that the target image fits exactly on it?
Since you did not specify if you are using the Unity or native APIs I will assume you are using Unity.
This is how you would go about it using the Vuforia API, placing this in a script attached to your ImageTarget GameObject.
IEnumerator Start()
{
    // Start needs at least one yield to run as a coroutine;
    // waiting a frame also gives Vuforia time to initialize the target
    yield return null;
    Vuforia.ImageTarget img = GetComponent<Vuforia.ImageTargetBehaviour>().ImageTarget;
    // The size is rounded off in the console display,
    // so the individual components are printed afterwards
    Debug.Log(img.GetSize());
    Debug.Log(img.GetSize().x);
    Debug.Log(img.GetSize().y);
    Debug.Log(img.GetSize().z);
}
Alternatively you can directly use the Bounds of the renderer.
void Start()
{
    Renderer r = GetComponent<Renderer>();
    Debug.Log(r.bounds.size.x);
    Debug.Log(r.bounds.size.y);
    Debug.Log(r.bounds.size.z);
}
Needless to say, this is just a quick solution; depending on the situation, you might want to use this at runtime to dynamically create content.
Yes, you can.
Scale the image relative to the image marker while placing it; when you run the app, you'll see that the size of the image stays relative to the marker you've placed it on.

OpenCL clCreateFromGLTexture using a different texture target

The aim of my project is to get a live camera feed on an Android device, use OpenCL to perform real-time filtering on those images, and render the output on the display.
I aim to do this in real-time that's why I am using OpenCL-OpenGL interop.
I have successfully managed to create a shared context using EGLContext and EGLDisplay. Now I am trying to use clCreateFromGLTexture so I can access these images in an OpenCL kernel. The problem, however, is that Android requires the texture target to be GL_TEXTURE_EXTERNAL_OES when binding the camera texture, as described here: http://developer.android.com/reference/android/graphics/SurfaceTexture.html, yet that is not a valid texture target when using clCreateFromGLTexture (https://www.khronos.org/registry/cl/sdk/1.1/docs/man/xhtml/clCreateFromGLTexture2D.html).
So I am not sure how to go about this.
This is how I create a GL Texture in android:
GLES20.glGenTextures(1, texture_id, 0);
GLES20.glBindTexture(texture_target, texture_id[0]);
and this is how I am trying to create a cl memory object:
glTexImage2D(texture_target, 0, GL_RGBA, 640, 480, 0, GL_RGBA, GL_UNSIGNED_BYTE, NULL);
cl_mem camera_image = clCreateFromGLTexture(m_context, CL_MEM_READ_WRITE, texture_target, 0, texture_id, &err);
The error I get when I try to create cl memory object from GL texture is CL_INVALID_VALUE.
I am pretty new to OpenGL, so there could be something basic I might have overlooked.
The texture you receive from the camera is not the usual texture you'd expect; you even have to specify an extension in the shader if you read from it.
You need to perform an additional copy from the GL_TEXTURE_EXTERNAL_OES target to another texture that is created in the usual way. With luck you can bind both of them to FBOs and then just issue a blit. If that doesn't work, you can always use the normal texture as a render target and simply draw a quad textured with the camera image.
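A minimal sketch of that second approach (drawing the camera image into a regular GL_TEXTURE_2D, which clCreateFromGLTexture does accept), assuming a GLES20/Java setup like the one in the question; cameraTexId, copyProgram and drawFullScreenQuad() are placeholders for your own camera texture, a program built from a trivial passthrough vertex shader plus the fragment shader below, and your own quad geometry/attribute code:
// Fragment shader must declare the OES extension to sample the camera texture
private static final String COPY_FRAG =
    "#extension GL_OES_EGL_image_external : require\n" +
    "precision mediump float;\n" +
    "uniform samplerExternalOES u_CameraTex;\n" +
    "varying vec2 v_TexCoord;\n" +
    "void main() { gl_FragColor = texture2D(u_CameraTex, v_TexCoord); }\n";

// 1. Create a normal GL_TEXTURE_2D that OpenCL is allowed to share
int[] ids = new int[1];
GLES20.glGenTextures(1, ids, 0);
int sharedTexId = ids[0];
GLES20.glBindTexture(GLES20.GL_TEXTURE_2D, sharedTexId);
GLES20.glTexImage2D(GLES20.GL_TEXTURE_2D, 0, GLES20.GL_RGBA, 640, 480, 0,
        GLES20.GL_RGBA, GLES20.GL_UNSIGNED_BYTE, null);
GLES20.glTexParameteri(GLES20.GL_TEXTURE_2D, GLES20.GL_TEXTURE_MIN_FILTER, GLES20.GL_LINEAR);
GLES20.glTexParameteri(GLES20.GL_TEXTURE_2D, GLES20.GL_TEXTURE_MAG_FILTER, GLES20.GL_LINEAR);

// 2. Attach it to an FBO so we can render into it
int[] fbos = new int[1];
GLES20.glGenFramebuffers(1, fbos, 0);
GLES20.glBindFramebuffer(GLES20.GL_FRAMEBUFFER, fbos[0]);
GLES20.glFramebufferTexture2D(GLES20.GL_FRAMEBUFFER, GLES20.GL_COLOR_ATTACHMENT0,
        GLES20.GL_TEXTURE_2D, sharedTexId, 0);

// 3. Each frame: draw a fullscreen quad that samples the camera (OES) texture
GLES20.glViewport(0, 0, 640, 480);
GLES20.glUseProgram(copyProgram); // built from a trivial vertex shader + COPY_FRAG
GLES20.glActiveTexture(GLES20.GL_TEXTURE0);
GLES20.glBindTexture(GLES11Ext.GL_TEXTURE_EXTERNAL_OES, cameraTexId);
GLES20.glUniform1i(GLES20.glGetUniformLocation(copyProgram, "u_CameraTex"), 0);
drawFullScreenQuad(); // your own quad geometry + attribute setup
GLES20.glBindFramebuffer(GLES20.GL_FRAMEBUFFER, 0);

// sharedTexId is now a plain GL_TEXTURE_2D, so calling
// clCreateFromGLTexture(ctx, CL_MEM_READ_WRITE, GL_TEXTURE_2D, 0, sharedTexId, &err)
// on the native side should accept it.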
