Rajawali and Vuforia 3D model positioning

I have followed the RajawaliVuforia tutorial and integrated Rajawali with Vuforia CloudReco. I am able to get the 3D model, but the model is not positioned properly at the center of the target image, and if I move the camera closer or farther, the model drifts out of the target image. Can someone let me know what the issue could be?

Vuforia passes the position (Vector3) and orientation (Quaternion) to Rajawali, which uses them to position and rotate the model. This can interfere with animations applied to the model: if you're using animations, or if you're setting the position manually, you'll get unpredictable results, because the position is set twice on each frame.
The way to fix this is to put your 3D model in a container (an empty BaseObject3D). Vuforia's position and orientation are then applied to the container rather than to your 3D model, so you can animate the model without unpredictable results.

Related

HUD post-processing in Godot

I have a project in Godot that renders billboarded quads on top of enemies. The quads (MeshInstances) are child nodes of the enemy nodes. I want to render just the quads to a viewport for post-processing, but the quads need to have the same position on screen that the enemies have (like a target reticle).
How can I apply a shader effect to the billboards only and not to the rest of the scene?
The Camera has a cull_mask property that lets you filter which layers it will see. And VisualInstances (such as MeshInstances) have a layers property that lets you specify to which layers they belong (by default they all belong to the first layer).
You can configure the layer names in Project Settings -> General -> Layer Names -> 3d Render. Do not confuse them with 3d Physics.
Thus, you can assign a different layer to the quad MeshInstances you want, and then set up a new Viewport with a new Camera as a child (make sure it is current) whose cull_mask renders only that layer. That way only those MeshInstances will be rendered to that Viewport.
You probably want to keep the properties of this Camera in sync with the Camera on the main Viewport (not only its global_transform, but also fov and any other property you might change). You can achieve this by copying its properties in _process of a script attached to the Camera inside the new Viewport.

Xamarin IOS Opentk - BlendFunc with transparent textures

I'm trying to render some label textures with a transparent background using OpenTK in Xamarin. At first the labels seemed to display properly (see picture 1), but when the view is rotated, some label backgrounds are no longer transparent (see picture 2).
The enabled BlendFunc is GL.BlendFunc(BlendingFactorSrc.SrcAlpha, BlendingFactorDest.OneMinusSrcAlpha).
My question is: how can I keep the labels transparent regardless of their positions?
The same code and shader can run properly on Android Devices by the way.
Ah yes, the good old transparency problem. Unfortunately this is one that a graphics programmer has to solve on their own.
For just a few labels, the most straightforward solution is likely to sort your labels by z-depth and then render them from farthest to closest. You'd probably need to do some matrix math on the label positions to adjust for the viewport rotation.
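The C# specifics aside, the back-to-front sort itself is simple. A Python sketch with made-up label positions, where depth is approximated as squared distance to a fixed camera point (a real renderer would transform into eye space first):

```python
# Painter's-algorithm sketch for transparent labels (illustrative names).
# Each label carries a world-space position; squared distance to the
# camera is enough for sorting purposes.

camera = (0.0, 0.0, 5.0)
labels = [
    {"text": "near", "pos": (0.0, 0.0, 4.0)},
    {"text": "far", "pos": (0.0, 0.0, -3.0)},
    {"text": "mid", "pos": (1.0, 0.0, 1.0)},
]

def depth_sq(label):
    dx, dy, dz = (p - c for p, c in zip(label["pos"], camera))
    return dx * dx + dy * dy + dz * dz

# Draw farthest first so that closer labels blend over them.
draw_order = sorted(labels, key=depth_sq, reverse=True)
print([l["text"] for l in draw_order])  # ['far', 'mid', 'near']
```

Re-sorting every frame (or whenever the view rotates) keeps the blending order correct as the camera moves.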
For the 3d game I'm working on I have chosen to implement the order-independent transparency method called WBOIT by Morgan McGuire, which is fairly simple to implement and yields relatively good results.

Is there a way to define the camera parameters myself?

I am working with the ShapeNet dataset, which contains 3D information. I want to create images from that dataset by defining the camera intrinsics and extrinsics myself (so it's as if I will be defining where my camera is with respect to the object, and what the focal length and optical center of the camera will be). Is there a concrete way to pick values for these?
PS: I can load the ShapeNet models in some 3D viewing software, so if I could extract the camera parameters it is using (since at any particular time it is showing me an image), that would also work.

Finding 3D coordinates (x, y, z) of a detected object (detected using a Haar cascade)

I have been able to successfully detect an object using a Haar cascade classifier in Python with OpenCV.
When the object is detected, a rectangle is shown around it. The x and y position of this rectangle are known, and so are its height and width (x, y, w, h).
The problem I have is trying to get the 3D coordinates (x, y, z) from this rectangle.
I have not been able to find any resources on getting the z coordinate of the detected object.
I calibrated my camera and found the camera matrix and the distortion coefficients, but I am not sure what to do next. I looked into pose estimation, but I do not think that will give me the 3D coordinates of this object.
Any help would be appreciated; I just need to be pointed in the right direction so I can continue my project.
Thank you.
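For reference, a single calibrated camera cannot recover z from the bounding box alone, but if the real-world size of the object is known, the pinhole model gives z = f · W_real / w_pixels. A sketch with made-up numbers (fx and the optical center come from your camera matrix; the object width is an assumption you must supply):

```python
# Pinhole depth estimate from a known object width (illustrative values).
fx = 800.0               # focal length in pixels, from the camera matrix
cx, cy = 320.0, 240.0    # optical center, from the camera matrix
W_real = 0.20            # true object width in meters (must be known)

x, y, w, h = 300, 200, 100, 120  # detected rectangle from the Haar cascade

z = fx * W_real / w              # depth along the optical axis
# Back-project the rectangle center into 3D camera coordinates.
u, v = x + w / 2.0, y + h / 2.0
X = (u - cx) * z / fx
Y = (v - cy) * z / fx            # assuming fy == fx
print(round(X, 3), round(Y, 3), round(z, 3))  # 0.06 0.04 1.6
```

If the object size is unknown, you need extra information instead: a second camera (stereo), a depth sensor, or a known plane the object sits on.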

Pygame sprites always on top example?

I was wondering if someone could give me a simple example of how to always draw sprites in pygame on --the top layer-- while an animation is being blitted on screen?
The scenario I'm trying to solve is having an animated sprite (e.g. a man walking on the spot) and various other objects passing him, while he is still animated.
My solution so far has the animation layering on top of the "passing objects", which is not what I want.
Thanks in advance.
I think you've partially answered your own question here.
It's a matter of organizing what gets drawn first. Some side-scrolling 2D games use a "layer" solution, in which there is a background layer, a middleground layer, and a foreground layer, and the drawing system renders one layer after another.
I've also seen Pokémon top-down style games simply sort the sprites by their vertical position, so sprites "nearest" to the camera are drawn last, and thus on top of the other sprites.
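That vertical-position sort is just a stable sort on each sprite's bottom edge before drawing; a minimal Python sketch with made-up sprite records:

```python
# Sort sprites so that ones lower on screen (larger bottom-edge y) are
# drawn last, i.e. on top. Names and coordinates are illustrative.
sprites = [
    {"name": "tree", "bottom": 40},
    {"name": "hero", "bottom": 120},
    {"name": "rock", "bottom": 80},
]

draw_order = [s["name"] for s in sorted(sprites, key=lambda s: s["bottom"])]
print(draw_order)  # ['tree', 'rock', 'hero']
```

Running this sort once per frame before blitting keeps the draw order correct as sprites move vertically.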
See how the current implementation of Scene2D in libGDX gives each Actor a z-index property which can later be used to organize Actors into layers.
Just draw your sprites with a LayeredUpdates group:
class pygame.sprite.LayeredUpdates
LayeredUpdates is a sprite group that handles layers and draws like OrderedUpdates.
LayeredUpdates(*sprites, **kwargs) -> LayeredUpdates
You can set the default layer through kwargs using ‘default_layer’ and an integer for the layer. The default layer is 0.
If the sprite you add has an attribute layer then that layer will be used. If the **kwarg contains ‘layer’ then the sprites passed will be added to that layer (overriding the sprite.layer attribute). If neither sprite has attribute layer nor **kwarg then the default layer is used to add the sprites.
and give your sprites the correct layer value.
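A minimal sketch of the LayeredUpdates approach (the sprite class and layer numbers are made up; the group draws lower layers first, so the highest layer always ends up on top):

```python
import pygame

class Block(pygame.sprite.Sprite):
    def __init__(self, layer):
        # LayeredUpdates reads the _layer attribute when the sprite is added.
        self._layer = layer
        super().__init__()
        self.image = pygame.Surface((16, 16))
        self.rect = self.image.get_rect()

background = Block(layer=0)      # drawn first
passing_object = Block(layer=1)  # drawn over the background
walking_man = Block(layer=2)     # drawn last, always on top

all_sprites = pygame.sprite.LayeredUpdates(background, passing_object, walking_man)
# sprites() returns bottom-to-top order; draw(screen) uses the same order.
print([s._layer for s in all_sprites.sprites()])  # [0, 1, 2]
```

In a game loop you would call `all_sprites.update()` and `all_sprites.draw(screen)` as with any other group; the animated sprite stays on top regardless of the order the sprites were added.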
