HoloLens: Re-scanning the environment using spatial understanding

I'm using spatial understanding to scan the environment and generate the spatial meshes (following the HoloToolkit spatial understanding example).
After generating the meshes initially, I want to have an option to re-scan the environment (removing the old meshes and regenerating new ones). Is there any possible way to achieve this? Any help would be much appreciated.

You should be able to restart spatial understanding. A place you could start is deleting the meshes (they are all parented under a single GameObject) and removing the SpatialUnderstanding components, then programmatically recreating and configuring them.

If you use voice commands from MixedRealityToolkit-Unity (HoloToolkit-Unity), you can bind the scanning to a voice command (for example "scan"); every time you say "scan" it should start scanning again.
I'm not sure how to delete the old meshes completely, but I don't see much point in deleting them: if the environment is updated, unnecessary meshes are removed automatically. I'm sure there is a call to delete all meshes somewhere in the toolkit though; I recommend checking GitHub for that. A rough sketch of the voice-command approach follows below.
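A minimal sketch of that approach, assuming the understanding meshes live under a single parent GameObject you can point at. The KeywordRecognizer part is standard Unity; the SpatialUnderstanding restart calls are left commented because their exact names vary between toolkit versions:

```csharp
using UnityEngine;
using UnityEngine.Windows.Speech;

// Rough sketch: say "scan" to throw away the old understanding meshes and start over.
// "meshParent" is assumed to be the GameObject the toolkit parents its meshes under.
public class RescanOnVoiceCommand : MonoBehaviour
{
    public GameObject meshParent;

    private KeywordRecognizer recognizer;

    void Start()
    {
        recognizer = new KeywordRecognizer(new[] { "scan" });
        recognizer.OnPhraseRecognized += OnPhraseRecognized;
        recognizer.Start();
    }

    private void OnPhraseRecognized(PhraseRecognizedEventArgs args)
    {
        if (args.text == "scan")
        {
            Rescan();
        }
    }

    private void Rescan()
    {
        // 1. Drop the old understanding meshes.
        foreach (Transform child in meshParent.transform)
        {
            Destroy(child.gameObject);
        }

        // 2. Restart the scan. The calls below are how I remember the HoloToolkit
        //    SpatialUnderstanding API; names differ between toolkit versions, so
        //    they are left commented as a pointer rather than working code.
        // SpatialUnderstanding.Instance.RequestBeginScanning();
        // ...and once the user has walked the space again:
        // SpatialUnderstanding.Instance.RequestFinishScan();
    }
}
```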


The best way to load an OpenStreetMap .osm file in a Docker container

My intentions:
implement vehicles as containers
simulate/move these containers on the .osm map-based roads
My viewpoint about the problem:
I have loaded the XML-based .osm file and processed it in Python using xml.dom, but I am not satisfied with the loading performance, because later on I will have to add/create more vehicles as containers that will be simulated on the same roads.
Suggestions needed:
This is my first time solving a problem related to maps. I need suggestions on how to proceed with this set of requirements, keeping performance/efficiency in mind. Suggestions in terms of implementation will be much appreciated. Thanks in advance!
Simulating lots of vehicles by running lots of Docker containers in parallel might work, I suppose. Maybe you're initialising the same image with different start locations etc. passed in as ENV vars? As a practical way of doing agent simulations this sounds a bit over-engineered to me, but as an interesting Docker experiment it might make sense.
You'll probably need a central service for holding and sharing state (the positions of the other vehicles) and serving it back to the multiple agents.
The challenge of loading an .osm file into some sort of database or internal map representation doesn't seem like the hardest part; since it can be done once at initialisation, I imagine it's not the most performance-critical part of this (a parsing sketch follows below).
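For the loading step, a streaming parser that keeps only the road nodes and their adjacency is usually fast enough even for fairly large extracts; the same pattern works in Python with xml.sax or ElementTree.iterparse. A minimal C# sketch (reused by the later snippets; all class and field names are hypothetical, and one-way handling is deliberately left out):

```csharp
using System.Collections.Generic;
using System.Globalization;
using System.Xml;

// Hypothetical minimal road graph: node id -> (lat, lon), plus adjacency between
// consecutive <nd> refs of every <way> carrying a "highway" tag.
public class OsmGraph
{
    public Dictionary<long, (double Lat, double Lon)> Nodes = new Dictionary<long, (double, double)>();
    public Dictionary<long, List<long>> Adjacency = new Dictionary<long, List<long>>();

    public static OsmGraph Load(string path)
    {
        var graph = new OsmGraph();
        var wayNodes = new List<long>();
        bool insideWay = false, isHighway = false;

        using (var reader = XmlReader.Create(path))
        {
            while (reader.Read())
            {
                if (reader.NodeType == XmlNodeType.Element)
                {
                    switch (reader.Name)
                    {
                        case "node":
                            graph.Nodes[long.Parse(reader.GetAttribute("id"))] = (
                                double.Parse(reader.GetAttribute("lat"), CultureInfo.InvariantCulture),
                                double.Parse(reader.GetAttribute("lon"), CultureInfo.InvariantCulture));
                            break;
                        case "way":
                            insideWay = true;
                            isHighway = false;
                            wayNodes.Clear();
                            break;
                        case "nd":
                            if (insideWay) wayNodes.Add(long.Parse(reader.GetAttribute("ref")));
                            break;
                        case "tag":
                            if (insideWay && reader.GetAttribute("k") == "highway") isHighway = true;
                            break;
                    }
                }
                else if (reader.NodeType == XmlNodeType.EndElement && reader.Name == "way")
                {
                    if (isHighway) graph.AddWay(wayNodes);   // only keep roads
                    insideWay = false;
                }
            }
        }
        return graph;
    }

    private void AddWay(List<long> ids)
    {
        for (int i = 0; i + 1 < ids.Count; i++)
        {
            AddEdge(ids[i], ids[i + 1]);
            AddEdge(ids[i + 1], ids[i]);   // treated as two-way for simplicity
        }
    }

    private void AddEdge(long from, long to)
    {
        if (!Adjacency.TryGetValue(from, out var list)) Adjacency[from] = list = new List<long>();
        list.Add(to);
    }
}
```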
I'm thinking you'll probably want to do "routing" through the road network (taking account of one-ways etc.?), giving your agents a purposeful path to follow to a destination. This gets more complicated if you want to model interactions with other agents: you might want to model getting stuck in traffic because other agents are going the same way, and even decisions to re-route because of traffic, so you may want quite a flexible routing system, perhaps self-coded (see the sketch after this paragraph).
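If you do self-code it, a plain Dijkstra over the adjacency structure sketched above is a reasonable starting point. A sketch only: it reuses the hypothetical OsmGraph type, uses straight-line edge length as the cost, and relies on PriorityQueue from .NET 6+:

```csharp
using System;
using System.Collections.Generic;

public static class Router
{
    // Plain Dijkstra over the OsmGraph sketched above. A real router would also
    // respect one-way tags, speed limits, turn restrictions, traffic, etc.
    public static List<long> ShortestPath(OsmGraph g, long start, long goal)
    {
        var dist = new Dictionary<long, double> { [start] = 0 };
        var prev = new Dictionary<long, long>();
        var queue = new PriorityQueue<long, double>();   // .NET 6+
        queue.Enqueue(start, 0);

        while (queue.TryDequeue(out long current, out double d))
        {
            if (current == goal) break;
            if (d > dist[current]) continue;                      // stale queue entry
            if (!g.Adjacency.TryGetValue(current, out var next)) continue;

            foreach (long n in next)
            {
                double nd = d + Distance(g.Nodes[current], g.Nodes[n]);
                if (!dist.TryGetValue(n, out double best) || nd < best)
                {
                    dist[n] = nd;
                    prev[n] = current;
                    queue.Enqueue(n, nd);
                }
            }
        }

        // Walk the predecessor chain back from the goal to rebuild the route.
        var path = new List<long> { goal };
        while (path[path.Count - 1] != start && prev.TryGetValue(path[path.Count - 1], out long p))
            path.Add(p);
        path.Reverse();
        return path;
    }

    private static double Distance((double Lat, double Lon) a, (double Lat, double Lon) b)
    {
        // Crude equirectangular approximation; fine within a city-sized area.
        double dLat = a.Lat - b.Lat;
        double dLon = (a.Lon - b.Lon) * Math.Cos(a.Lat * Math.PI / 180);
        return Math.Sqrt(dLat * dLat + dLon * dLon);
    }
}
```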
But there's lots of open source routing systems which work with OSM data, to at least draw inspiration from. See this list: https://wiki.openstreetmap.org/wiki/Routing#Developers
Popular choices like OSRM are designed to scale up to country-sized or even global OpenStreetMap data, but I imagine that's overkill for you (you're probably looking at simulating within a city road network?). Even so, it's probably easy enough to get working in a Docker container.
Or you might find something lightweight like the code of the JOSM routing plugin easier to embed in your Docker image and customize (although I see that's using a library called "JGraphT").
Then, working backwards from a calculated route, you can compute interpolated steps along that path, which will let your simulated agents take a step on each iteration (simulated movement); a sketch follows below.
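The interpolation itself is just walking the route's node positions in fixed-length increments. A sketch, reusing the tuple positions from the snippets above, with a step length you would pick to match your tick rate:

```csharp
using System;
using System.Collections.Generic;

public static class PathStepper
{
    // Turns a route (list of node positions) into roughly evenly spaced steps, so an
    // agent can advance one step per simulation tick. stepLength is in the same
    // (degree-ish) units as the positions; pick it to match your tick rate and speed.
    public static List<(double Lat, double Lon)> Interpolate(
        List<(double Lat, double Lon)> route, double stepLength)
    {
        var steps = new List<(double, double)>();
        if (route.Count == 0) return steps;

        for (int i = 0; i + 1 < route.Count; i++)
        {
            var (aLat, aLon) = route[i];
            var (bLat, bLon) = route[i + 1];
            double segLen = Math.Sqrt(Math.Pow(bLat - aLat, 2) + Math.Pow(bLon - aLon, 2));
            int n = Math.Max(1, (int)Math.Ceiling(segLen / stepLength));
            for (int k = 0; k < n; k++)
            {
                double t = (double)k / n;   // fraction along this segment
                steps.Add((aLat + t * (bLat - aLat), aLon + t * (bLon - aLon)));
            }
        }
        steps.Add(route[route.Count - 1]);  // make sure the destination is included
        return steps;
    }
}
```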

How do I access the nodes for text in Blender?

I was looking for a way to animate text in Blender. I found one, but it uses nodes involving the text object. How can I access this set of nodes?
Here is my Blender screen (don't pay attention to what I'm doing):
The first image you found shows nodes from the Animation Nodes addon. Note that current versions contain compiled code; if the pre-built releases listed don't match your system, you will need to build it yourself.
You may want to try a simpler solution first: the Animated Text addon offers several text animation options.

Building recognition in Vuforia & Unity

I would like to make a mobile application so that when a user points at a building, it renders various information about it. My problem is that I don't really know if this can be done. The only way I can see is to take an image of the building and upload it as an image target in Unity. But what if the image changes over time (vegetation?), or the user points the camera from a different perspective than the one I used?
Is there a way to do this so that the problems mentioned above won't be an issue?
You can approach this in several ways (or even combine them together):
Write a mechanism that can download updated Vuforia datasets from a server into your app at runtime (see the sketch after this list). This way you can update the building images whenever you wish and ensure the building is still detected if something changes.
Make sure you take enough pictures of the building from enough perspectives. You can use many images of the building in a single Vuforia dataset.
Try to find partial sections of the building that are very likely to remain unchanged.
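For the first option, here is a rough sketch of what the runtime part could look like. The download half is plain UnityWebRequest; the activation half follows the runtime dataset loading pattern from older Vuforia Unity samples, so treat the exact type and enum names (ObjectTracker, VuforiaUnity.StorageType, ...) as assumptions to check against your SDK version:

```csharp
using System.Collections;
using System.IO;
using UnityEngine;
using UnityEngine.Networking;
using Vuforia;   // assumed: the Vuforia Unity extension is imported in the project

public class RuntimeDataSetLoader : MonoBehaviour
{
    // Hypothetical server hosting the exported dataset (a .xml/.dat pair).
    public string baseUrl = "https://example.com/datasets/Buildings";

    public IEnumerator DownloadAndActivate()
    {
        // 1. Download both dataset files into persistent storage.
        //    (SendWebRequest needs Unity 2017.2+; older versions used Send().)
        foreach (string ext in new[] { ".xml", ".dat" })
        {
            using (var request = UnityWebRequest.Get(baseUrl + ext))
            {
                yield return request.SendWebRequest();
                File.WriteAllBytes(Path.Combine(Application.persistentDataPath, "Buildings" + ext),
                                   request.downloadHandler.data);
            }
        }

        // 2. Load and activate the dataset. These calls mirror the runtime dataset
        //    sample from older Vuforia SDKs; names may differ in your version.
        var tracker = TrackerManager.Instance.GetTracker<ObjectTracker>();
        var dataSet = tracker.CreateDataSet();
        string xmlPath = Path.Combine(Application.persistentDataPath, "Buildings.xml");
        if (dataSet.Load(xmlPath, VuforiaUnity.StorageType.STORAGE_ABSOLUTE))
        {
            tracker.Stop();
            tracker.ActivateDataSet(dataSet);
            tracker.Start();
        }
    }
}
```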

Animation Sprite

I want to create a 2D sprite that mimics the provided image:
http://a4.mzstatic.com/us/r1000/069/Purple2/v4/e6/0d/73/e60d73a8-6d78-64c2-dd59-9aabb54c7837/mzl.ujapwanw.320x480-75.jpg
and create different facial expressions as sprites for Unity3D, in order to build an Android application with multiple face expressions using those sprites. So my question is: what software should I use throughout this process?
Please let me know the simplest step-by-step procedure, as I am taking my first steps in computer graphics.
Thanks a lot.
Image manipulation is what you are looking for. To modify the image you have and generate other facial expressions from it, you need to be very good at math. Image manipulation is not basic stuff, and I hope you are not new to programming.
Now that you understand that, you need OpenCV to be able to do this, plus a wrapper for it in C#. You can get an already-made wrapper here: https://www.assetstore.unity3d.com/en/#!/content/21088. It works on Windows, Mac, Android and iOS and will save you time. It's NOT free, but the price is worth it compared to the time you would spend building the wrappers for all platforms yourself.
Once you get this, you can start learning OpenCV from the following links:
http://docs.opencv.org/doc/tutorials/tutorials.html
http://opencv-srf.blogspot.com/
http://shervinemami.info/openCV.html
http://www.cs.iit.edu/~agam/cs512/lect-notes/opencv-intro/opencv-intro.html
If you use the Unity plugin I mentioned, you can ask the author of the plugin to help you out if you get stuck.
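That said, if all you end up needing is cutting and pasting regions between hand-drawn expression images, Unity's own Texture2D API can do that without OpenCV at all. A plain-Unity sketch (not OpenCV; the textures must have Read/Write enabled, and the region coordinates are made up for illustration):

```csharp
using UnityEngine;

public static class ExpressionUtil
{
    // Copy a rectangular region (say, the mouth) from one readable texture into a
    // copy of another, producing a new expression variant. Plain Unity API only.
    public static Sprite SwapRegion(Texture2D baseFace, Texture2D donorFace,
                                    int x, int y, int width, int height)
    {
        var result = new Texture2D(baseFace.width, baseFace.height, TextureFormat.RGBA32, false);
        result.SetPixels(baseFace.GetPixels());

        Color[] patch = donorFace.GetPixels(x, y, width, height);
        result.SetPixels(x, y, width, height, patch);
        result.Apply();

        // Wrap the texture in a Sprite so it can be assigned to a SpriteRenderer.
        return Sprite.Create(result, new Rect(0, 0, result.width, result.height),
                             new Vector2(0.5f, 0.5f));
    }
}
```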

What would you recommend for doing simple 2D graphics?

I want to build a program that will (as part of what it's doing) display lines organically growing and interacting horizontally across the screen. Here's a sample image; just imagine the lines sprouting from the left and growing to the right:
The lines would look like the lines used on Google Maps Transit Overlay or OnNYTurf's transit pages.
It's a personal project, so I'm open to just about any language and library combination, but I don't know where to start. What have you used in the past to create graphics similar to this? What would you recommend? I want it to run on Windows without anything extra needed (.NET is fine), and it doesn't have to run elsewhere. It needs to run as an actual program, not JavaScript in the browser.
There's obviously no 'right' answer to this, but the purpose isn't to start an argument about X being better than Y; rather, it's to gather a list of graphics toolkits for simple 2D graphics that people recommend for their ease of use, community, or whatever.
Processing may be just the tool for you.
Like you said, there are many ways to tackle this problem. Personally, since it is a Windows-based project, I would go with a .NET implementation using WPF. There are tutorials on its 2D drawing features out there (http://www.wpftutorial.net/DrawOnPhysicalDevicePixels.html, for one), and a minimal sketch follows below. Again, there is no right answer here. You might also pick some new technology and let this project be a way to learn something new, provided you do not have a looming deadline.
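To give a feel for the WPF route, here is a minimal no-XAML sketch that grows a Polyline to the right on a timer; everything used is standard WPF, but the "growth" rule is just a placeholder for whatever organic behaviour the lines should have:

```csharp
using System;
using System.Windows;
using System.Windows.Controls;
using System.Windows.Media;
using System.Windows.Shapes;
using System.Windows.Threading;

public static class GrowingLines
{
    [STAThread]
    public static void Main()
    {
        var app = new Application();
        var canvas = new Canvas { Background = Brushes.White };
        var window = new Window { Title = "Growing lines", Width = 800, Height = 400, Content = canvas };

        var line = new Polyline { Stroke = Brushes.SteelBlue, StrokeThickness = 4 };
        line.Points.Add(new Point(0, 200));
        canvas.Children.Add(line);

        var rng = new Random();
        var timer = new DispatcherTimer { Interval = TimeSpan.FromMilliseconds(50) };
        timer.Tick += (s, e) =>
        {
            // Extend the line a few pixels to the right, wandering a little vertically.
            Point last = line.Points[line.Points.Count - 1];
            line.Points.Add(new Point(last.X + 5, last.Y + rng.Next(-8, 9)));
        };
        timer.Start();

        app.Run(window);
    }
}
```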
