I would like to change the material of all the spatial meshes at runtime:
If I start the application in Room1, then walk to Room2 and change the material to "newMaterial", I can do that with the following code:
foreach (SpatialAwarenessMeshObject meshObject in observer.Meshes.Values)
{
    if (meshObject?.GameObject == null)
        continue;

    meshObject.Renderer.sharedMaterial = newMaterial;
}
But the above code changes only the currently visible meshes (i.e. the meshes in Room2): if I walk back to Room1, the old material is still applied there.
So how can I ensure that the material is changed on all the meshes, not only the visible ones?
I'm using MRTK v2.5.3 and the XR SDK pipeline.
Spatial observer is: WindowsMixedRealitySpatialMeshObserver
To set the material used when displaying meshes, it is recommended to change the configuration of the Spatial Awareness Mesh Observer instead of changing every mesh object. Please refer to the following code:
IMixedRealityDataProviderAccess dataProviderAccess =
    CoreServices.SpatialAwarenessSystem as IMixedRealityDataProviderAccess;

if (dataProviderAccess != null)
{
    IReadOnlyList<IMixedRealitySpatialAwarenessMeshObserver> observers =
        dataProviderAccess.GetDataProviders<IMixedRealitySpatialAwarenessMeshObserver>();

    foreach (IMixedRealitySpatialAwarenessMeshObserver observer in observers)
    {
        // Update the visible material
        observer.VisibleMaterial = myMaterial;
    }
}
I want to dynamically duplicate my glTF models with different positions/colors. To do so I have done:
const L_4_G = new Object3D();
...
const multiLoad_4 = (result, position) => {
    const model = result.scene.children[0];
    model.position.copy(position);
    model.scale.set(0.05, 0.05, 0.05);
    //
    L_4_G.add(model.clone());
    scene.add(model);
};
...
function duplicateModel4() {
    L_4_G.translateX(-1.2);
    L_4_G.translateY(0.0); // 0.48
    L_4_G.translateZ(1.2);
    L_4_G.rotateY(Math.PI / 2);
    scene.add(L_4_G);
}
I couldn't find out from the documentation how to change the Object3D color. Can you please tell me how to do that? Thanks in advance.
Here is the full code that I'm using, and here are the models
Update
I have seen this solution, to store a set of colors in the object's userData and choose the color later:
L_2_G.userData.colors = { green: 0x00FF00, red: ..., ... };
L_2_G.children[0].material.color.set(L_2_G.userData.colors["green"]);
But I'm getting an error that children[0] is undefined, even though I can see via the console that this object has a child, a material and a color: console.log(L_2_G.children), console.log(L_2_G.children.length) --> 0
Also I have tried getObjectByName as explained here:
scene.getObjectByName(name).children[0].material.color.set(color);
which also results in children[0] being undefined; scene.getObjectByName(name).children.length is 0.
THREE.Object3D is a base class for anything that can go in a scene graph, including lights, cameras, and empty objects. Not all Object3D instances have geometry or materials. You may be looking for the THREE.Mesh subclass which does have materials and colors.
In general, code like getObjectByName(...) and model = result.scene.children[0] is very content-specific. The file might contain many nested objects, and .children[0] just grabs the first part. It's usually best to traverse the scene graph instead, looking for the objects you want to modify (e.g. looking for all Meshes, or Meshes with a particular name).
const model = result.scene;
model.traverse((object) => {
    if (object.isMesh) {
        object.material.color.setHex(0x404040);
    }
});
Then you can either add the entire group to your scene (scene.add(model)), or just add parts of it. Keep in mind that adding meshes to a new parent removes them from their previous parent, and you shouldn't do that while traversing the previous parent. Instead you can make a list of meshes, and add them in a second step:
const meshes = [];
result.scene.traverse((object) => {
    if (object.isMesh) {
        meshes.push(object);
    }
});
for (const mesh of meshes) {
    scene.add(mesh);
}
Finally, the position of an object is inherited from its parents. By removing the object from its original parent you might change its position in the scene. If you are planning to assign a new position to the object anyway, that is fine; otherwise you can preserve the world position, as the sketch below shows.
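For example, a minimal sketch (assuming the usual three.js imports; mesh stands for one of the meshes collected above) that preserves the world position across re-parenting:

const worldPosition = new THREE.Vector3();
mesh.getWorldPosition(worldPosition); // world position, including parent transforms
scene.add(mesh);                      // re-parenting drops the inherited transform
mesh.position.copy(worldPosition);    // restore the previous world position

This assumes the scene itself has no transform of its own, which is the usual case.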
I'm working on the spatial mapping processing for my HoloLens project.
Somehow, calling "SpatialSurfaceMesh::TryComputeLatestMeshAsync" keeps returning the same mesh data over time.
Is there another process involved in updating the observer?
void SpatialMapping::AddOrUpdateSurface(winrt::Windows::Perception::Spatial::SpatialCoordinateSystem const& coordinateSystem)
{
    using namespace winrt::Windows::Perception::Spatial::Surfaces;

    SpatialBoundingBox axisAlignedBoundingBox =
    {
        { 0.f, 0.f, 0.f },
        { 50.f, 50.f, 50.f },
    };
    SpatialBoundingVolume bounds = SpatialBoundingVolume::FromBox(coordinateSystem, axisAlignedBoundingBox);
    m_surfaceObserver.SetBoundingVolume(bounds);

    m_surfaceObserver.ObservedSurfacesChanged(
        winrt::Windows::Foundation::TypedEventHandler
            <SpatialSurfaceObserver, winrt::Windows::Foundation::IInspectable>
            ({ this, &SpatialMapping::Observer_ObservedSurfacesChanged })
    );
}
void SpatialMapping::Observer_ObservedSurfacesChanged(winrt::Windows::Perception::Spatial::Surfaces::SpatialSurfaceObserver const& sender,
    winrt::Windows::Foundation::IInspectable const& object)
{
    using namespace winrt::Windows::Perception::Spatial::Surfaces;

    const auto mapContainingSurfaceCollection = sender.GetObservedSurfaces();

    // Process surface adds and updates.
    for (const auto& pair : mapContainingSurfaceCollection)
    {
        auto id = pair.Key();
        auto info = pair.Value();
        InsertAsync(id, info);
    }
}
Concurrency::task<void> SpatialMapping::InsertAsync(winrt::guid /*const&*/ id, winrt::Windows::Perception::Spatial::Surfaces::SpatialSurfaceInfo /*const&*/ newSurfaceInfo)
{
    using namespace winrt::Windows::Perception::Spatial::Surfaces;

    return concurrency::create_task([this, id, newSurfaceInfo]
    {
        const auto surfaceMesh = newSurfaceInfo.TryComputeLatestMeshAsync(m_maxTrianglesPerCubicMeter, m_surfaceMeshOptions).get();

        std::lock_guard<std::mutex> guard(m_meshCollectionLock);
        m_updatedSurfaces.emplace(id, surfaceMesh);
    });
}
Generation works, updating does not.
Manual attempt, same problem:
winrt::Windows::Foundation::IAsyncAction SpatialMapping::CollectSurfacesManuel()
{
    const auto mapContainingSurfaceCollection = m_surfaceObserver.GetObservedSurfaces();

    for (const auto& pair : mapContainingSurfaceCollection)
    {
        auto id = pair.Key();
        auto info = pair.Value();
        auto mesh{ co_await info.TryComputeLatestMeshAsync(m_maxTrianglesPerCubicMeter, m_surfaceMeshOptions) };
        {
            std::lock_guard<std::mutex> guard(m_meshCollectionLock);
            m_updatedSurfaces.emplace(id, mesh);
        }
    }
}
MVCE:
Create a new project with the template
"Holographic DirectX 11 App (UWP) (C++/WinRT)"
Add the files:
https://github.com/lpnxDX/HL_MVCE_SpatialSurfaceMeshUpdateProblem.git
Replace m_main in AppView.h
We did some research and have some thoughts about your question; let me explain the findings.
Does your Observer_ObservedSurfacesChanged method actually fire? Adding output statements or breakpoints can help you check this. Since the SurfaceObserver should always be available, we usually need to check the availability of the surface observer in each frame and recreate it when necessary; for a sample code snippet, please see here.
Have you set m_surfaceMeshOptions? It is not visible in the code you posted. If it is missing, you can configure it (C++/WinRT uses function-call syntax for properties) with the following statement:
m_surfaceMeshOptions.IncludeVertexNormals(true);
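For reference, a fuller configuration might look like the following sketch, modeled on the Holographic spatial mapping sample (the formats shown are the ones that sample requests, not requirements):

using namespace winrt::Windows::Perception::Spatial::Surfaces;
using namespace winrt::Windows::Graphics::DirectX;

m_surfaceMeshOptions = SpatialSurfaceMeshOptions();
// Request normals plus the vertex/index formats used by the official sample.
m_surfaceMeshOptions.IncludeVertexNormals(true);
m_surfaceMeshOptions.VertexPositionFormat(DirectXPixelFormat::R16G16B16A16IntNormalized);
m_surfaceMeshOptions.TriangleIndexFormat(DirectXPixelFormat::R16UInt);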
Microsoft provides the Holographic spatial mapping sample, which shows how to acquire spatial mapping data from Windows Perception in real time. It is similar to your needs; to narrow down whether the issue is in your code, please try running this sample on your device.
If you still can't solve the problem after the above steps, could you provide an MVCE so that we can locate the issue or find a solution? Be careful to remove any privacy-related or other business-specific code.
Your code has the following problems:
TryComputeLatestMeshAsync should be called from Observer_ObservedSurfacesChanged, not from concurrency::create_task.
TryComputeLatestMeshAsync returns a mesh with a matrix, vertices and indices. The indices should be stored in a safe location on the first run; they don't change later. The vertices and matrix should be copied each time they are returned. You shouldn't keep the mesh object itself, because its data is updated from various threads (see the sketch after this list).
ObservedSurfacesChanged shouldn't be called every frame; it is a long-running function.
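To illustrate the second point, here is a minimal sketch (hypothetical names; assuming C++/WinRT and that you only need the raw buffer bytes plus the transform) of copying the data out instead of keeping the SpatialSurfaceMesh:

#include <winrt/Windows.Foundation.Numerics.h>
#include <winrt/Windows.Perception.Spatial.h>
#include <winrt/Windows.Perception.Spatial.Surfaces.h>
#include <winrt/Windows.Storage.Streams.h>
#include <vector>

struct MeshSnapshot
{
    std::vector<uint8_t> vertices; // copied on every update
    std::vector<uint8_t> indices;  // copied once; they don't change later
    winrt::Windows::Foundation::Numerics::float4x4 meshToBase{};
};

MeshSnapshot CopyMesh(
    winrt::Windows::Perception::Spatial::Surfaces::SpatialSurfaceMesh const& mesh,
    winrt::Windows::Perception::Spatial::SpatialCoordinateSystem const& baseCoordinateSystem)
{
    MeshSnapshot snapshot;

    // Copy the vertex bytes out of the WinRT buffer.
    auto vertexBuffer = mesh.VertexPositions().Data();
    snapshot.vertices.assign(vertexBuffer.data(), vertexBuffer.data() + vertexBuffer.Length());

    // In a real implementation you would copy the indices only on the first run.
    auto indexBuffer = mesh.TriangleIndices().Data();
    snapshot.indices.assign(indexBuffer.data(), indexBuffer.data() + indexBuffer.Length());

    // Copy the transform as well, rather than holding on to the mesh.
    if (auto transform = mesh.CoordinateSystem().TryGetTransformTo(baseCoordinateSystem))
    {
        snapshot.meshToBase = transform.Value();
    }
    return snapshot;
}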
There may be more to it. I would recommend starting from the sample mentioned earlier.
As part of my work I was told to build a small game in Phaser where I have to drag and drop some objects with the mouse over some static objects in the scene, and then work out whether the drop was done on the matching static object. The static and non-static objects all have non-rectangular boundaries, so I used PhysicsEditor to draw their boundaries and import them into Phaser. To detect collisions while dragging an object I am using the Matter callbacks for 'collisionstart' and 'collisionend'.

Say, for example, I am trying to drag an apple onto a tree. As soon as the apple body collides with the tree outline, 'collisionstart' triggers, but as I move the apple inside the tree boundary, 'collisionstart' and 'collisionend' trigger multiple times. So this seems to be an unreliable way to detect overlap between two objects.
Collision code:
var canDrag = this.matter.world.nextGroup();

currentObject = this.matter.add.image(360, 360, 'Carrot', 0, { chamfer: 16 }).setCollisionGroup(canDrag);
currentObject.body.label = 'Carrot';

this.matter.add.mouseSpring({ collisionFilter: { group: canDrag } });

this.matter.world.on('collisionstart', function (event, bodyA, bodyB) {
    if ((bodyA.parent.label == 'Tree') || (bodyB.parent.label == 'Tree')) {
        tree.tint = tintColor;
    }
});

this.matter.world.on('collisionend', function (event, bodyA, bodyB) {
    if ((bodyA.parent.label == 'Tree') || (bodyB.parent.label == 'Tree')) {
        tree.tint = normalColor;
    }
});
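One possible workaround, sketched below (untested against your setup; Query is the Matter.js module that Phaser 3 re-exports, and currentObject/tree are the objects from the snippet above), is to ignore the intermediate collision events and test for overlap once, when the object is dropped:

var Query = Phaser.Physics.Matter.Matter.Query;

this.input.on('pointerup', function () {
    // Query.collides returns one collision record per body that currently
    // overlaps the dragged body, regardless of collisionstart/collisionend noise.
    var hits = Query.collides(currentObject.body, [tree.body]);
    if (hits.length > 0) {
        tree.tint = tintColor;   // dropped on the tree
    } else {
        tree.tint = normalColor; // dropped elsewhere
    }
});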
I'm developing a game using AndEngine and the TMX and Box2D extensions.
I create objects (sprites) like coins at specific positions while creating my map.
I use a ContactListener to check whether the player collides with a coin.
How can I delete such a sprite?
And how can I best organize my sprites?
Thanks =)
I assume you create a PhysicsConnector to connect your sprites and bodies. Create a list of these physics connectors, and when you decide you should remove a body (and its sprite), do the following:
Body bodyToRemove = // get it from your contact listener

// mPhysicsConnectors is the list of your coins' physics connectors.
for (PhysicsConnector connector : mPhysicsConnectors) {
    if (connector.getBody() == bodyToRemove) {
        // This method should also remove the physics connector itself from the list.
        removeSpriteAndBody(connector);
    }
}
About sprite organization: coins are reusable sprites; you shouldn't recreate them every time. You can use an object pool (here's a question about this topic); a generic sketch follows.
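For illustration, a generic pool sketch (plain Java rather than AndEngine's own pool classes; the names are hypothetical and Sprite is your coin sprite type):

import java.util.ArrayDeque;
import java.util.Deque;

// Keeps dead coin sprites around for reuse instead of recreating them.
public abstract class CoinPool {
    private final Deque<Sprite> freeCoins = new ArrayDeque<Sprite>();

    public Sprite obtain() {
        Sprite coin = freeCoins.poll();            // reuse a pooled coin if available
        return coin != null ? coin : createCoin(); // otherwise create a new one
    }

    public void recycle(Sprite coin) {
        coin.setVisible(false); // hide it until it is needed again
        freeCoins.push(coin);
    }

    // Build the coin sprite (and attach its body) here.
    protected abstract Sprite createCoin();
}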
I advise you to set the user data of the body. Then, in your collision handler, you will be able to work with it. Small example:
body.setUserData(...);
..
public void postSolve(Contact contact, ContactImpulse impulse) {
    ... bodyAType = (...) contact.getFixtureA().getBody().getUserData();
    ... bodyBType = (...) contact.getFixtureB().getBody().getUserData();
    if (bodyAType != null && bodyBType != null) {
        if (bodyAType.equals(...)) {
            // ...do what you need
        }
    }
}
I am recording video from a webcam using DirectshowLib2005.dll in C#.NET. I have this code to start video recording, as below:
try
{
    IBaseFilter capFilter = null;
    IBaseFilter asfWriter = null;
    IFileSinkFilter pTmpSink = null;
    ICaptureGraphBuilder2 captureGraph = null;

    GetVideoDevice();

    if (availableVideoInputDevices.Count > 0)
    {
        //
        // init capture graph
        //
        graphBuilder = (IFilterGraph2)new FilterGraph();
        captureGraph = (ICaptureGraphBuilder2)new CaptureGraphBuilder2();

        //
        // set the filter graph on the graph builder
        //
        captureGraph.SetFiltergraph(graphBuilder);

        //
        // add the capture device the graph will use
        //
        graphBuilder.AddSourceFilterForMoniker(AvailableVideoInputDevices.First().Mon, null, AvailableVideoInputDevices.First().Name, out capFilter);
        captureDeviceName = AvailableVideoInputDevices.First().Name;

        //
        // check whether the saving path exists; if not, create it
        //
        if (!Directory.Exists(ConstantHelper.RootDirectoryName + "\\Assets\\Video\\"))
        {
            Directory.CreateDirectory(ConstantHelper.RootDirectoryName + "\\Assets\\Video\\");
        }

        #region WMV
        //
        // set the output file name and file type
        //
        captureGraph.SetOutputFileName(MediaSubType.Asf, ConstantHelper.RootDirectoryName + "\\Assets\\Video\\" + videoFilename + ".wmv", out asfWriter, out pTmpSink);

        //
        // configure which video setting is used by the graph
        //
        IConfigAsfWriter lConfig = asfWriter as IConfigAsfWriter;
        Guid asfFilter = new Guid("8C45B4C7-4AEB-4f78-A5EC-88420B9DADEF");
        lConfig.ConfigureFilterUsingProfileGuid(asfFilter);
        #endregion

        //
        // render the stream to the output file using the graph settings
        //
        captureGraph.RenderStream(null, null, capFilter, null, asfWriter);

        m_mediaCtrl = graphBuilder as IMediaControl;
        m_mediaCtrl.Run();
        isVideoRecordingStarted = true;
        VideoStarted(m_mediaCtrl, null);
    }
    else
    {
        isVideoRecordingStarted = false;
    }
}
catch (Exception Ex)
{
    ErrorLogging.WriteErrorLog(Ex);
}
If you observe these lines of code:
//
// configure which video setting is used by the graph
//
IConfigAsfWriter lConfig = asfWriter as IConfigAsfWriter;
Guid asfFilter = new Guid("8C45B4C7-4AEB-4f78-A5EC-88420B9DADEF");
lConfig.ConfigureFilterUsingProfileGuid(asfFilter);
they apply the video settings described by that GUID. I got this GUID from the file located at "C:\windows\WMSysPr9.prx".
So my question is: how do I create my own video settings, with format, resolution and so on?
How do I record video from the webcam in black-and-white mode or in grayscale?
So my question is: how do I create my own video settings, with format, resolution and so on?
GUID-based profiles are deprecated, though you can still use them. You can build a custom profile in code using WMCreateProfileManager and friends (you start with an empty profile and add video and/or audio streams at your discretion). This is a C++ API, and I suppose that WindowsMedia.NET, a sister project to the DirectShowLib you are already using, provides an interface to it from .NET code.
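For the native side, a minimal sketch of that flow (error handling mostly omitted; the stream configuration details are left as comments):

#include <wmsdk.h>
#pragma comment(lib, "wmvcore.lib")

HRESULT BuildCustomProfile(IWMProfile** ppProfile)
{
    IWMProfileManager* pManager = nullptr;
    HRESULT hr = WMCreateProfileManager(&pManager);
    if (FAILED(hr)) return hr;

    // Start from an empty profile and add streams at your discretion.
    hr = pManager->CreateEmptyProfile(WMT_VER_9_0, ppProfile);
    if (SUCCEEDED(hr))
    {
        IWMStreamConfig* pStream = nullptr;
        hr = (*ppProfile)->CreateNewStream(WMMEDIATYPE_Video, &pStream);
        if (SUCCEEDED(hr))
        {
            // Configure bitrate, buffer window and the WM_MEDIA_TYPE
            // (resolution, frame rate, codec) on pStream here, then:
            hr = (*ppProfile)->AddStream(pStream);
            pStream->Release();
        }
    }
    pManager->Release();
    return hr;
}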
The Windows SDK WMGenProfile sample both shows how to build a profile manually and provides a tool to build one interactively and save it into a .PRX file that you can use in your application:
$(WindowsSDK)\Samples\multimedia\windowsmediaformat\wmgenprofile
How do I record video from the webcam in black-and-white mode or in grayscale?
The camera gives you a picture, which then goes through the pipeline, with certain processing, up to the recording. The ability to make it greyscale is not something inherent.
There are two things you might want to think of. First of all, if the camera is capable of stripping the color information on capture, you can leverage this. Check it out: if its settings have a Saturation slider, just put it in the minimal position and the camera will give you greyscale.
In code, you use the IAMVideoProcAmp interface for this, as in the sketch below.
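For instance, a minimal sketch with the DirectShowLib interfaces (capFilter is the capture filter from your code; error handling omitted):

// Drive Saturation to its minimum so the camera itself delivers greyscale.
IAMVideoProcAmp procAmp = capFilter as IAMVideoProcAmp;
if (procAmp != null)
{
    int min, max, step, defaultValue;
    VideoProcAmpFlags flags;
    if (procAmp.GetRange(VideoProcAmpProperty.Saturation, out min, out max, out step, out defaultValue, out flags) == 0)
    {
        procAmp.Set(VideoProcAmpProperty.Saturation, min, VideoProcAmpFlags.Manual);
    }
}

Not every camera exposes Saturation, so check the GetRange result before applying it.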
Another option, in particular if the camera is missing the mentioned capability, is to apply a post-processing filter or effect that converts to greyscale. There is no stock solution for this, but there are several ways to achieve the effect:
use a third-party filter that strips color
export frames from the DirectShow pipeline and convert the data in code using the Color Control Transform DSP (available starting with Windows Vista) or GDI functions
use a Sample Grabber in the streaming pipeline and update the image bits directly (a rough sketch of the callback follows)
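To give an idea of the Sample Grabber route, a rough sketch of the callback (DirectShowLib's ISampleGrabberCB; it assumes the grabber pin is negotiated to RGB24, requires compiling with /unsafe, and the class name is illustrative):

using System;
using DirectShowLib;

class GrayscaleCallback : ISampleGrabberCB
{
    public int SampleCB(double sampleTime, IMediaSample pSample)
    {
        return 0; // not used; the work happens in BufferCB
    }

    public int BufferCB(double sampleTime, IntPtr pBuffer, int bufferLen)
    {
        unsafe
        {
            // RGB24 DirectShow buffers are packed B, G, R; replace each
            // pixel with its BT.601 luma to get greyscale in place.
            byte* p = (byte*)pBuffer;
            for (int i = 0; i + 2 < bufferLen; i += 3)
            {
                byte gray = (byte)((29 * p[i] + 150 * p[i + 1] + 77 * p[i + 2]) >> 8);
                p[i] = p[i + 1] = p[i + 2] = gray;
            }
        }
        return 0;
    }
}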