Get velocity or position delta of the hand during manipulation on HoloLens 2 with MRTK 2.5

What I'm trying to do:
I am using MRTK 2.5.1 / HoloLens 2 and the OnlineMaps asset in Unity. I want to use my hand (by either touching the map or pointing from a distance) to scroll the map, i.e. tap/grip the map, then drag the hand around the x/z plane.
What I've done previously:
With HoloToolkit/HoloLens 1, this was easily done with a listener for manipulation events.
The OnManipulationChanged event provided me with a CumulativeDelta value of how the hand position had changed since the start of the manipulation.
What I've tried in MRTK2.5:
I started off with ManipulationHandler, which gives me the pointer in the event data. The pointer->controller has a velocity value, but that's always (0,0,0). I couldn't see anything else obvious relating to the velocity or delta position of the thing (the hand) triggering the manipulation.
The PointerHandler script has an OnPointerDragged event, but again has no property that looks like a velocity or delta position of hand.
Do I need to be using Gestures?
Not looking for code, just a brief explanation of the correct approach to get the hand velocity or hand position delta once the hand has tapped/clicked on the map.

Actually, the ManipulationHandler component is deprecated and the ObjectManipulator component is its replacement for manipulation behavior. So it is recommended that you start with the ObjectManipulator component to make your map movable.
For your question about how to scroll the map around the x/z plane, constraints are aimed at limiting manipulation in some way. Once constraints are enabled on your ObjectManipulator component, transform changes will be processed by the constraints registered with the selected constraint manager. In your case, MoveAxisConstraint can meet your need: add a MoveAxisConstraint to the game object from the Constraint Manager component and set the Constraint On Movement property of the Move Axis Constraint component to Y Axis. For more information about MoveAxisConstraint, please refer to: https://microsoft.github.io/MixedRealityToolkit-Unity/Documentation/README_ConstraintManager.html#moveaxisconstraint

Related

STEP (ISO 10303-21) Unset Attributes

I've been building a parser for STEP-formatted data (specifically the ISO 10303-21 standard), but I've run into a roadblock revolving around a single character - '$'.
A quick Google search reveals that in STEP this character denotes an 'unset' value. I interpreted this as an uninitialized value; however, I don't know exactly what I should do with it.
For example, take the following definitions:
#111=AXIS2_PLACEMENT_3D('Circle Axis2P3D',#109,#110,$) ;
#109=CARTESIAN_POINT('Axis2P3D Location',(104.14,0.,0.)) ;
#110=DIRECTION('Axis2P3D Direction',(1.,-0.,0.)) ;
I cannot see how this would even work: the reference direction is uninitialized, and therefore an x-axis cannot be derived, meaning that anything using this AXIS2_PLACEMENT_3D would also have undefined data.
Normally when I reach this point, I would just define some sort of default data for the given data type (vertices (0,0,0), directions (1,0,0), etc.); however, this doesn't seem to work here, because there's a chance that my default direction would conflict with the supplied data.
I have Googled what to do in this scenario, only to come up with nothing.
I also have a PDF for a very similar STEP format (ISO 10303-42), but it too doesn't mention any sort of default data, or provide any more insight into how this works.
So, to explicitly state my question: what do I do with uninitialized data in STEP (ISO 10303-21) data?
You need to be able to represent 'unset' as a distinct value. It doesn't mean the same thing as an uninitialized value or a default value. For example, you might represent AXIS2_PLACEMENT_3D as an object whose data members are pointers to CARTESIAN_POINT and DIRECTION objects, and the $ means that the corresponding pointer will be null in your representation.
Dealing with null values will depend on the context. It may be some kind of error if the data is really necessary. Or it may be that the data isn't necessary, such as if you don't need the axis to be oriented, and a point and direction are sufficient to represent the data.
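For illustration only, here is a minimal C++ sketch of that representation; the type and member names are hypothetical, not taken from any STEP library:

#include <array>
#include <string>

// Hypothetical in-memory types for a hand-rolled ISO 10303-21 parser.
struct CartesianPoint { std::string name; std::array<double, 3> coords; };
struct Direction      { std::string name; std::array<double, 3> ratios; };

struct Axis2Placement3D {
    std::string           name;
    const CartesianPoint* location      = nullptr; // #109
    const Direction*      axis          = nullptr; // #110
    const Direction*      ref_direction = nullptr; // '$' in the file -> stays null
};

// Consumers then decide per context what a null means, e.g.:
//   if (placement.ref_direction == nullptr) {
//       // No orientation supplied; for a circle, any direction orthogonal
//       // to *axis can serve as the derived x-axis.
//   }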
In this case, #111 is a local coordinate system with the following four attributes:
a character string name;
a pointer to the origin (#109);
a pointer to the axis (#110);
a pointer to the second axis (the reference direction).
If #111 is the coordinate system of a circle (which I guess from the 'name' value), the axis is the normal of the circle's plane, while the reference direction points to the start of the circle (the position of the circle's zero t parameter). Since a circle is a closed curve, you can place this zero-t position anywhere; it has no influence on the circle's geometric shape, so the reference direction is not mandatory in this case and is omitted. It is your choice how you handle this situation.
When the $ sign is used, the value is not required. Specifically, if there is a series of optional values and you want to specify a value for, let's say, the 3rd optional argument but you don't want to specify values for the 1st and 2nd optional arguments, you can use the $ sign for those two.
Take a look here for a better description.

Apply a layout's location algorithm to a filtered set of vertexes

The job of the layout is to place vertexes at given locations. If the layout is iterative, then the layout's job is to iterate through an algorithm, moving the vertexes with each step, until the final layout configuration is achieved.
I have a multi-level graph - say 100 objects of type A; each A object has 10 objects as children; call the children type B objects.
I would like the layout location placement algos to operate on objects of type A only (let's say) - and ignore the B objects.
The cleanest way to achieve this objective might be to define a transform to expose those elements that should participate in the 'algo' placement operation via the step method.
Currently, the step methods, assuming they respect the lock flag at all, do their calculations including the locked vertexes first - so lock/unlock won't work in this case.
Is it possible to do this somehow without resorting to multiple graph objects?
If you want to ignore the B objects entirely, then the simplest option is to create a graph consisting only of the A objects, lay it out, and use the locations from that layout.
That said, it's not clear how you intend to assign locations to the B objects. And if the A objects aren't connected to each other at all, then this approach won't make much sense. (OTOH, if they aren't connected to each other then you're really just laying out a bunch of trees.)
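If it helps picture it, here is a rough, library-agnostic C++ sketch of that idea; the GraphT/LayoutT types and all of their methods are purely illustrative, not from any particular graph library:

#include <unordered_map>

// Illustrative only: build a sub-graph of the A vertexes, run the layout on
// it, and collect the resulting locations.
template <typename VertexT, typename PointT, typename GraphT, typename LayoutT>
std::unordered_map<VertexT, PointT> layoutTypeAOnly(const GraphT& full, LayoutT& layout)
{
    // 1. Sub-graph containing only A vertexes and the edges between them.
    GraphT aOnly;
    for (const VertexT& v : full.vertices())
        if (full.isTypeA(v))
            aOnly.addVertex(v);
    for (const auto& e : full.edges())
        if (full.isTypeA(e.source()) && full.isTypeA(e.target()))
            aOnly.addEdge(e);

    // 2. Run the (possibly iterative) layout on the sub-graph only.
    layout.initialize(aOnly);
    while (!layout.done())
        layout.step();

    // 3. Copy the locations back out; the B children still need their own
    //    placement rule, e.g. clustered around their A parent.
    std::unordered_map<VertexT, PointT> locations;
    for (const VertexT& v : aOnly.vertices())
        locations[v] = layout.location(v);
    return locations;
}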

UAV counter indices used across multiple shaders?

I've been trying to implement a Compute Shader based particle system.
I have a compute shader which builds a structured buffer of particles, using a UAV with the D3D11_BUFFER_UAV_FLAG_COUNTER flag.
When I add to this buffer, I check if this particle has any complex behaviours, which I want to filter out and perform in a separate compute shader. As an example, if the particle wants to perform collision detection, I add its index to another structured buffer, also with the D3D11_BUFFER_UAV_FLAG_COUNTER flag.
I then run a second compute shader, which processes all the indices, and applies collision detection to those particles.
However, in the second compute shader, I'd estimate that about 5% of the indices are wrong - they belong to other particles, which don't support collision detection.
Here's the compute shader code that performs the list building:
// append to destination buffer
uint dstIndex = g_dstParticles.IncrementCounter();
g_dstParticles[ dstIndex ] = particle;
// add to behaviour lists
if ( params.flags & EMITTER_FLAG_COLLISION )
{
    uint behaviourIndex = g_behaviourCollisionIndices.IncrementCounter();
    g_behaviourCollisionIndices[ behaviourIndex ] = dstIndex;
}
If I split out the "add to behaviour lists" bit into a separate compute shader, and run it after the particle lists are built, everything works perfectly. However I think I shouldn't need to do this - it's a waste of bandwidth going through all the particles again.
I suspect that IncrementCounter is actually not guaranteed to return a unique index into the UAV, and that there is some clever optimisation going on that means the index is only valid inside the compute shader it is used in. And thus my attempt to pass it to the second compute shader is not valid.
Can anyone give any concrete answers as to what's going on here? And is there a way for me to keep the filtering inside the same compute shader as my core update?
Thanks!
IncrementCounter is an atomic operation and so will (driver/hardware bugs notwithstanding) return a unique value to each thread that calls it.
Have you thought about using Append/Consume buffers for this, as it's what they were designed for? The first pass simply appends the complex collision particles to an AppendStructuredBuffer and the second pass consumes from the same buffer but using a ConsumeStructuredBuffer view instead. The second run of compute will need to use DispatchIndirect so you only run as many thread groups as necessary for the number in the list (something the CPU won't know).
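As a rough host-side sketch of what that could look like in D3D11 (all variable names here are placeholders; it assumes the indirect-arguments buffer was created with D3D11_RESOURCE_MISC_DRAWINDIRECT_ARGS and its Y/Z group counts pre-initialised to 1):

#include <d3d11.h>

// Run the collision pass with exactly as many thread groups as there are
// entries in the append buffer, without a CPU read-back of the count.
void DispatchCollisionPass(ID3D11DeviceContext*       context,
                           ID3D11ComputeShader*       collisionCS,
                           ID3D11UnorderedAccessView* particlesUAV,
                           ID3D11UnorderedAccessView* collisionIndicesUAV, // Append/Consume view
                           ID3D11Buffer*              indirectArgs)        // uint[3] group counts
{
    // Copy the append buffer's hidden counter into the X group-count slot.
    // Note: this is the raw element count; if a thread group processes more
    // than one index, an extra tiny pass must round it up / divide it.
    context->CopyStructureCount(indirectArgs, 0, collisionIndicesUAV);

    // UINT(-1) keeps the counters written by the first pass, so the second
    // pass can Consume() from the same buffer.
    ID3D11UnorderedAccessView* uavs[]          = { particlesUAV, collisionIndicesUAV };
    const UINT                 initialCounts[] = { UINT(-1), UINT(-1) };
    context->CSSetShader(collisionCS, nullptr, 0);
    context->CSSetUnorderedAccessViews(0, 2, uavs, initialCounts);

    context->DispatchIndirect(indirectArgs, 0);
}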
The usual recommendations apply, though: have you tried the D3D11 Debug Layer, and running it on the reference device, to be sure it isn't a driver issue?

Transparency in Progressive Photon Mapping in CUDA

I am working on a project which is based on OptiX. I need to use progressive photon mapping, so I am trying to use the Progressive Photon Mapping sample, but a transparent material is not implemented there.
I've googled a lot and also tried to understand other samples that contain transparent materials (e.g. Glass, Tutorial, whitted). In the end, I came up with the following approach:
Find the hit point (intersection point) (h below)
Generate another ray from that point
Use the color returned by the newly generated ray
Below you can find the code for that part, but I do not understand why I get a black color (0.f, 0.f, 0.f) for the newly generated ray (step 3 above).
optix::Ray ray( h, t, rtpass_ray_type, scene_epsilon );
HitPRD refr_prd;
refr_prd.ray_depth = hit_prd.ray_depth+1;
refr_prd.importance = importance;
rtTrace( top_object, ray, refr_prd );
result += (1.0f - reflection) * refraction_color * refr_prd.attenuation;
Any ideas will be appreciated.
Please note that refr_prd.attenuation should contain some color after the call to rtTrace(). I've included reflection and refraction_color to help you better understand the procedure; you can simply ignore them.
There are a number of methods to diagnose your problem.
Isolate the contribution of the refracted ray, by removing any contribution of the reflection ray.
Make sure you have a miss program. HitPRD::attenuation needs to be written to by all of your closest hit programs and your miss programs. If you suspect the miss program is being called, set your miss color to something obviously bad ([1,0,1] is my favorite); a minimal sketch of such a miss program follows below.
Use rtPrintf in combination with rtContextSetPrintLaunchIndex or setPrintLaunchIndex to print out the individual values of the product, to see which term is zero for a given pixel. If you don't restrict the output to a given launch index you will get too much output. You probably want to print out the depth as well.
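For reference, this is roughly what such a miss program could look like in the old (pre-7) OptiX CUDA API; the HitPRD layout shown here is only a guess based on the fields used in the question:

#include <optix_world.h>

// Assumed payload layout, based on the fields used in the question.
struct HitPRD
{
  float3       attenuation;
  unsigned int ray_depth;
  float        importance;
};

rtDeclareVariable(HitPRD, hit_prd, rtPayload, );

RT_PROGRAM void miss()
{
  // Obviously wrong colour so missed rays stand out while debugging;
  // replace it with the real background/environment colour afterwards.
  hit_prd.attenuation = make_float3(1.0f, 0.0f, 1.0f);
}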

Re-positioning a Rigid Body in Bullet Physics

I am writing a character animation rendering engine that uses Bullet Physics as a physics simulation engine.
A sequence will start out with no model on the screen, then an animation will be assigned to that model, the model will be moved to frame 0 of the animation, and the engine will begin rendering the model with the animation.
What is the correct way to re-position the rigid bodies on the character model when it is initialized at frame 0?
Currently I am using this code, which is called immediately after the animation is assigned to the model and the bones are moved to the frame 0 position:
_world->removeRigidBody(_body);
bool k = (_type == Kinematics);
_body->setCollisionFlags(_body->getCollisionFlags() & ~btCollisionObject::CF_NO_CONTACT_RESPONSE);
btTransform tr = BulletPhysics::ConvertD3DXMatrix(&(_bone->getCombinedTrans()));
tr *= _trans;
_body->setCenterOfMassTransform(tr);
_body->clearForces();
_body->setLinearVelocity(btVector3(0,0,0));
_body->setAngularVelocity(btVector3(0,0,0));
_world->addRigidBody(_body, _groupID, _groupMask);
The issue is that sometimes this works, and other times it does not. For example, take the skirt of a model. Sometimes it will show up in the natural position, other times slightly misaligned before falling into place, and other times it shows up completely clipped through the body, as if collision were turned off and some force had pushed it in that direction. This does make sense most of the time, because in the test animation I am using, the model's initial position is in the center of the screen, but the animation starts off the left side of the screen. Does anyone know how to solve this?
I know the bones on the skirt are not the problem, because I turned off physics and forced it to manually update the bone positions each frame, and everything was in the correct positions throughout the entire animation.
EDIT: I also have constraints, might that be what's causing this?
Here is my reposition method that does exactly this.
void LimbBt::reposition(btVector3 position, btQuaternion orientation) {
    btTransform initialTransform;
    initialTransform.setOrigin(position);
    initialTransform.setRotation(orientation);

    // Move both the body and its motion state to the new transform.
    mBody->setWorldTransform(initialTransform);
    mMotionState->setWorldTransform(initialTransform);
}
The motion state mMotionState is the motion state you created for the btRigidBody in the beginning. Just add your clearForces() call and zeroed velocities to it to stop the body from carrying on from the new position as if it had gone through a portal. That should do it. It works nicely for me here.
Edit: The constraints will adapt if you reposition all rigid bodies correctly. For that purpose, it is easy to calculate the relative positions and reposition the whole constrained rigid-body construct according to them. If you do it incorrectly, you will get severe twitching, as the constraints will try to adjust your construct numerically, causing high forces if the constraint gaps are large.
Edit 2: Another issue is that if you need deterministic behavior (every time you reset your bodies, they should fall in exactly the same way), then you will have to kill your old dynamicsWorld, recreate it and add all the bodies again. The world stores some information about the bodies that just cannot be cleared for now. This might change in the future, as Bullet 4 is going to support deterministic resets. But for now, if you do experiments with deterministic resets, you need to drop the world and recreate it.
source: discussion with Erwin Coumans, the developer of Bullet Physics.
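A rough sketch of that "drop the world and recreate it" reset, assuming the dispatcher, broadphase, solver and collision configuration are owned elsewhere and can be reused (for a fully deterministic reset you may need to recreate those as well; all names here are illustrative):

#include <btBulletDynamicsCommon.h>
#include <vector>

void resetWorldDeterministically(btDiscreteDynamicsWorld*& world,
                                 btCollisionDispatcher* dispatcher,
                                 btBroadphaseInterface* broadphase,
                                 btConstraintSolver* solver,
                                 btDefaultCollisionConfiguration* config,
                                 const std::vector<btRigidBody*>& bodies) {
    // Detach the bodies, then throw away the world and its cached state.
    for (btRigidBody* body : bodies)
        world->removeRigidBody(body);
    delete world;

    // Recreate the world and re-add the bodies; reposition each one
    // (as in reposition()/teleport() above) before stepping again.
    world = new btDiscreteDynamicsWorld(dispatcher, broadphase, solver, config);
    for (btRigidBody* body : bodies)
        world->addRigidBody(body);
}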
I can't tell you what causes the unusual outcome when moving rigid bodies but I can definitely sympathize!
There are three things you'll need to do in order to solve this:
Convert your rigid bodies to kinematic ones
Adjust the World Transform of the body's motion state and NOT the rigid body
Convert the kinematic body back to a rigid body
Here is a short, tested code snippet that effectively teleports a rigid body by updating its motion state to the new position and orientation, and nullifying all velocities and forces acting upon it.
void teleport(btVector3 position, btQuaternion& orientation) const {
    btTransform transform;
    transform.setIdentity();
    transform.setOrigin(position);
    transform.setRotation(orientation);

    m_rigidBodyVehicle->setWorldTransform(transform);
    m_rigidBodyVehicle->getMotionState()->setWorldTransform(transform);

    m_rigidBodyVehicle->setLinearVelocity(btVector3(0.0f, 0.0f, 0.0f));
    m_rigidBodyVehicle->setAngularVelocity(btVector3(0.0f, 0.0f, 0.0f));
    m_rigidBodyVehicle->clearForces();
}
