HoloLens: Spatial awareness in dynamic environments

My HoloLens 2 application requires the system to disregard some basic changes in the environment after a hologram has been placed. Sometimes these changes are in close proximity to the hologram, e.g. the physical surface below the hologram shifts laterally while everything else in the room remains constant, or a physical object is registered with the hologram. Currently, these changes tend to cause my hologram to drift. Should I simply turn off the spatial mesh observer in MRTK after placing the hologram? There is a fundamental issue about spatial awareness here that I don't understand: how does spatial awareness work in dynamic environments, particularly when you want to ignore certain aspects that are changing? I appreciate any advice - I'm a clinician, not a developer, so much of this is new to me.

The following link is probably the best place to start: Coordinate systems.
Fortunately, or rather unfortunately, the problem you describe is mentioned there in the section "Headset tracks incorrectly due to dynamic changes in the environment". Apparently the only suggestion given is to use the device in a less dynamic environment.
To effectively address this problem, one would probably need an AI algorithm that recognizes and responds to complex changes in the space. However, this is only a suggestion.

We recommend using a spatial anchor to locate the hologram you placed.
A spatial anchor marks an important point in the world to ensure that anchored holograms stay precisely in place.
If you are using this technology, note that the locations where you create anchors should have stable visual features (that is, features that don't change often). For example, the physical surface you mentioned shifting laterally is not a good place to create an anchor, but a static position with stable visual features near that surface would be.
If you are new to spatial anchors, you can start from here: Spatial Anchor

Related

Tracking the top of heads with Kinect

I was wondering if there is an existing API for tracking the tops of people's heads with the Kinect, e.g. when the Kinect is facing downwards from a ceiling.
If not, how might I implement such a thing with its depth data?
No. The Kinect expects to be facing a standing (or seated, given the appropriate flag) human. All APIs (official or 3rd party) that have a notion of skeleton tracking expect this.
If you wish to track someone from above, you will need to use a library such as OpenCV (or EmguCV, for C# development). Well, you don't have to, but they offer utilities to help with computer vision and image processing. These libraries don't care whether you are using a Kinect or just a regular RGB camera.
Using the Kinect from above, you could use the depth data to help locate and track blobs. With the Kinect at a known distance from the floor, have a few people walk under it and see what z-coordinates you get out of it -- you can then assume that anything within a certain z-coordinate range is a person walking across the screen (vs. a cat, or something else).
You will need to use standard image processing techniques (see OpenCV reference above) to initially find the blobs within the image. Once found, the depth data from the Kinect might be useful but I think you'll find it isn't ultimately necessary if you're just watching people walk across the floor.
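As a rough illustration of the depth-thresholding idea above, here is a minimal Python/OpenCV sketch. It assumes a 16-bit depth frame in millimetres has already been grabbed from the sensor; the file name, ceiling height, height band, and blob-size cutoff are all placeholders you would adjust for your own setup:

```python
import cv2
import numpy as np

# Hypothetical 16-bit depth frame (millimetres) grabbed from a ceiling-mounted Kinect.
depth = np.load("depth_frame.npy")          # e.g. shape (424, 512), dtype uint16 -- assumption

CEILING_MM = 3000        # sensor-to-floor distance, measured once -- assumption
MIN_HEIGHT_MM = 1200     # anything taller than this is treated as a person
MAX_HEIGHT_MM = 2100     # ...and shorter than this (filters out tall furniture, etc.)

# Convert depth-from-sensor into height-above-floor, then keep the "person" band.
height = CEILING_MM - depth.astype(np.int32)
mask = ((height > MIN_HEIGHT_MM) & (height < MAX_HEIGHT_MM)).astype(np.uint8) * 255

# Clean up speckle noise, then find blobs; each large blob is a head/shoulder region.
mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, np.ones((5, 5), np.uint8))
contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
for c in contours:
    if cv2.contourArea(c) > 500:             # ignore tiny speckles -- threshold is arbitrary
        x, y, w, h = cv2.boundingRect(c)
        print("person-sized blob centred at", (x + w // 2, y + h // 2))
```

Tracking the blob centres from frame to frame (nearest-neighbour matching is usually enough at walking speed) then gives you per-person trajectories.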
We built a Kinect-driven experience where the sensors had to point downward to detect users walking along a wall. We used openTSPS to do all the work of taking the camera input and doing blob detection and handing off tracked "persons" to (in our case) a Processing app. It works really well for us.
http://opentsps.com/

Software to visually represent telecommunications networks

I am working on some software tools used for the design and optimization of telecoms networks: routing, capacity allocation and topology.
To represent the network nodes and interconnecting links I currently use standard MFC calls to draw things like lines and ellipses using mouse clicks, menu commands and so on.
For the time being this has been an adequate means of graphically representing what the network looks like, as I have been more concerned with getting the underlying algorithms right, improving efficiency, and so on.
At some stage I will want to improve the look and feel of the software. Is anyone aware of any GUI software that is particularly suited to this purpose, open source or otherwise, that would be suitable for the network building stage? The intention is to use something that is more slick than what I am currently doing, especially when it comes to (say) dragging nodes into the drawing area and setting their properties. Also of interest would be graphics to display bar charts representing link utilization levels.
Thanks in anticipation.
For looks, I think you should go for WPF, but the migration from MFC won't be that easy if you have a huge code base. GDI is said to be good performance-wise, though.

How to make colours on one screen look the same as another

Given two separate computers, how could one ensure that colours are being displayed roughly the same on each screen?
For example, one screen might have 50% more brightness than another, so colours appear duller on one screen. One artist on one computer might be seeing the pictures differently from another; it's important they are seeing the same levels.
Is there some sort of calibration technique you can do via software? Any techniques? Or is a hardware solution the only way?
If you are talking about lab-critical calibration (that is, the colours on one monitor need to exactly match the colours on another, and both need to match an external reference as closely as possible) then a hardware colorimeter (with its own appropriate software and test targets) is the only solution. Software solutions can only get you so far.
The technique you described is a common software-only solution, but it's only for setting the gamma curves on a single device. There is no control over the absolute brightness and contrast; you are merely ensuring that solid colours match their dithered equivalents. That's usually done after setting the brightness and contrast so that black is as black as it can be and white is as white as it can be, but you can still distinguish not-quite-black from black and not-quite-white from white. Each monitor, then, will be optimized for its own maximum colour gamut, but it will not necessarily match any other monitor in the shop (even monitors that are the same make and model will show some variation due to manufacturing tolerances and age/use). A hardware colorimeter will (usually) generate a custom colour profile for the device under test as it is at the time of testing, and there is generally an end-to-end solution built into the product (so your scanner, printer, and monitor are all as closely matched as they can be).
You will never get to an absolute end-to-end match in a complete system, but hardware will get you as close as you can get. Software alone can only get you to a local maximum for the device it's calibrating, independent of any other device.
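To make the "solid colour matches its dithered equivalent" idea concrete, here is a minimal Python/Pillow sketch that generates such a gamma-check patch. The target gamma and patch size are arbitrary choices, not values from the answer above; the patch only tests gamma, not brightness or contrast:

```python
from PIL import Image

# A 50% black/white checkerboard next to a solid grey. When the display's gamma
# matches TARGET_GAMMA, the two halves blend to the same tone at viewing distance.
TARGET_GAMMA = 2.2   # assumption: typical sRGB-ish target
SIZE = 128

# Grey whose light output should equal a 50% dither under the target gamma.
solid_value = round(255 * 0.5 ** (1 / TARGET_GAMMA))

patch = Image.new("L", (SIZE * 2, SIZE))
for y in range(SIZE):
    for x in range(SIZE):
        patch.putpixel((x, y), 255 if (x + y) % 2 == 0 else 0)   # dithered half
        patch.putpixel((x + SIZE, y), solid_value)               # solid half
patch.save("gamma_check.png")
```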
What you need to investigate are color profiles.
Wikipedia has some good articles on this:
https://en.wikipedia.org/wiki/Color_management
https://en.wikipedia.org/wiki/ICC_profile
The basic thing you need is the color profile of the display on which the color was seen. Then, with the color profile of display #2, you can take the original color and convert it into a color that will look as close as possible (depends on what colors the display device can actually represent).
Color profiles are platform independent and many modern frameworks support them directly.
You may be interested in reading about how Apple has dealt with this issue:
Color Programming Topics
https://developer.apple.com/library/archive/documentation/Cocoa/Conceptual/DrawColor/DrawColor.html
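As a small illustration of the profile-to-profile conversion described above, here is a hedged Python sketch using Pillow's ImageCms bindings to LittleCMS. The two .icc file names are placeholders for profiles measured or exported for each display:

```python
from PIL import Image, ImageCms

# Hypothetical ICC profiles for the two displays -- file names are assumptions.
profile_a = ImageCms.getOpenProfile("display_a.icc")
profile_b = ImageCms.getOpenProfile("display_b.icc")

# Build a transform mapping colours as seen on display A into display B's space.
transform = ImageCms.buildTransform(profile_a, profile_b, "RGB", "RGB")

# Convert a single colour by pushing a 1x1 image through the transform.
swatch = Image.new("RGB", (1, 1), (180, 40, 90))
converted = ImageCms.applyTransform(swatch, transform)
print(converted.getpixel((0, 0)))   # the closest match display B can show
```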
You'd have to allow or ask the individual users to calibrate their monitors. But there's enough variation across monitors - particularly between models and brands - that trying to implement a "silver bullet" solution is basically impossible.
As @Matt Ball observes, calibrating your monitors is what you are trying to do. Here's one way to do it without specialised hardware or software: for 'roughly the same', visual calibration against a reference image is likely to be adequate.
Getting multiple monitors of varying quality/brand/capabilities to render a given image the same way is simply not possible.
If you have complete control over the monitor, video card, calibration hardware/software, and lighting used, then you have a shot. But that's only if you are in complete control of the desktop and the environment.
Assuming you are just accounting for LCDs, they are built with different types of panels that have a host of different capabilities. Brightness is just one factor (albeit a big one); another is simply the number of colors they are capable of rendering.
Beyond that, there is the environment that the monitor is in. Even assuming the same brand monitor and calibration points, a person will perceive a different color if an overhead fluorescent is used versus an incandescent placed next to the monitor itself. At one place I was at we had to shut off all the overheads and provide exact lamp placement for the graphic artists. Picky picky. ;)
I assume that you have no control over the hardware used, each user has a different brand and model monitor.
You have also no control over operating system color profiles.
An extravagant solution would be to display a test picture or pattern, and ask your users to take a picture of it using their mobile or webcam.
Download the picture to the computer, and check whether its levels are valid or too out of range.
This will also ensure ambient light at the office is appropriate.
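If you went down that route, the level check itself could be as simple as the following Python sketch; the photo file name and the clipping thresholds are arbitrary assumptions, and a real check would also need to locate the test pattern inside the photo:

```python
import numpy as np
from PIL import Image

# Hypothetical photo of the test pattern sent back by a user -- assumption.
photo = np.array(Image.open("user_photo.jpg").convert("L"), dtype=np.float64)

lo, hi = np.percentile(photo, [1, 99])
clipped_black = np.mean(photo <= 2) * 100     # % of pixels crushed to black
clipped_white = np.mean(photo >= 253) * 100   # % of pixels blown to white

print(f"1st/99th percentile levels: {lo:.0f} / {hi:.0f}")
print(f"clipped shadows: {clipped_black:.1f}%  clipped highlights: {clipped_white:.1f}%")
if clipped_black > 5 or clipped_white > 5:
    print("Levels look out of range; screen settings or room lighting need adjusting.")
```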

Algorithms and techniques for "unclustering" map features with OpenLayers

I'm working with OpenLayers (though the advice I need is more general) to create an "unclustering" behavior.
Instead of collapsing multiple features that are close to each other at a given zoom level into a single feature, I want to do the opposite: push those features apart so each is visually legible. For my application (a traceroute visualisation app), the precise location of a feature when zoomed out is basically irrelevant, but seeing each feature (and its label) clearly is of utmost importance.
For two features, the technique seems trivial (find the line defined by the two features and push away along that line). But what about more than two features? My geometry is weak -- I feel like this must be a solved problem with a simple solution, but I don't know what magic words to use to even start my research. Where should I be looking to learn how to do this? What techniques are known to be fast and stable?
Bonus points if you can point me to resources that will help not only with pushing the features away, but also with changing label positions to keep the features as close as possible to their correct geographic positions.
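One way to generalise the two-feature trick is to treat it as a small force-directed layout: repeatedly apply the pairwise push-apart to every pair that is too close, until everything is legible. Here is a minimal Python sketch of that idea, assuming features are already in projected map coordinates; the iteration count, step size, and minimum distance are arbitrary placeholders:

```python
import math

def uncluster(points, min_dist, iterations=50, step=0.5):
    """Iteratively push points apart until every pair is at least min_dist apart."""
    pts = [list(p) for p in points]
    for _ in range(iterations):
        moved = False
        for i in range(len(pts)):
            for j in range(i + 1, len(pts)):
                dx = pts[j][0] - pts[i][0]
                dy = pts[j][1] - pts[i][1]
                d = math.hypot(dx, dy)
                if d < 1e-9:
                    dx, d = 1e-6, 1e-6   # coincident points: pick an arbitrary direction
                if d < min_dist:
                    # Push both points apart along the line joining them --
                    # exactly the two-feature case, applied pairwise.
                    push = step * (min_dist - d) / 2
                    ux, uy = dx / d, dy / d
                    pts[i][0] -= ux * push
                    pts[i][1] -= uy * push
                    pts[j][0] += ux * push
                    pts[j][1] += uy * push
                    moved = True
        if not moved:
            break
    return [tuple(p) for p in pts]

# Example: three nearly coincident hops from a traceroute.
print(uncluster([(0, 0), (1, 0), (0.5, 0.5)], min_dist=10))
```

The returned list keeps the input order, so each displaced point can still be tied back to its true location with a leader line or a label offset, which covers the bonus part of the question.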

Antialiasing alternatives

I've seen antialiasing on Windows using GDI+, Java and also that provided by Photoshop and Gimp. Are there any other libraries out there which provide antialiasing facility without depending on support from the host OS?
Antigrain Geometry provides anti-aliased graphics in software.
As simon pointed out, the term anti-aliasing is misused/abused quite regularly so it's always helpful to know exactly what you're trying to do.
Since you mention GDI, I'll assume you're talking about maintaining nice crisp edges when you resize images - so something like a character in a font looks clean and not pixelated when you resize it to 2x or 3x its original size. For these sorts of things I've used a technique in the past called alpha-tested magnification - you can read the whitepaper here:
http://www.valvesoftware.com/publications/2007/SIGGRAPH2007_AlphaTestedMagnification.pdf
When I implemented it, I used more than one plane so I could get better edges on all types of objects, but the paper covers that briefly towards the end. Of all the approaches (that I've used) to maintain quality when scaling vector images, this was the easiest and highest quality. It also has the advantage of being easily implemented in hardware. From an existing API standpoint, your best bet is to use either OpenGL or Direct3D - that being said, it really only requires bilinear filtering and texture mapping to accomplish what it does, so you could roll your own (I have in the past). If you are always dealing with rectangles and only need to do scaling, it's pretty trivial, and adding rotation doesn't add that much complexity. If you do roll your own, make sure to pay particular attention to subpixel positioning (how you resolve pixel positions that do not fall on a full pixel), as this is critical to the quality and sometimes overlooked.
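For a feel of the core idea (store a distance field in the alpha channel, bilinearly upsample it, then threshold at 0.5), here is a minimal software-only Python sketch. The glyph file, field resolution, and spread value are placeholders, and this is only the single-channel version, not the multi-plane variant mentioned above:

```python
import numpy as np
from PIL import Image
from scipy.ndimage import distance_transform_edt

def make_distance_field(mask, spread=8.0):
    """Signed distance field mapped to [0, 1], with 0.5 on the shape's edge."""
    inside = distance_transform_edt(mask)      # distance to nearest background pixel
    outside = distance_transform_edt(~mask)    # distance to nearest foreground pixel
    sdf = (inside - outside) / (2.0 * spread) + 0.5
    return np.clip(sdf, 0.0, 1.0)

# Hypothetical input: a binary glyph/shape mask loaded from disk -- assumption.
mask = np.array(Image.open("glyph.png").convert("L")) > 127

# Store a small distance field instead of the full-resolution mask.
field = Image.fromarray((make_distance_field(mask) * 255).astype(np.uint8))
small = field.resize((64, 64), Image.BILINEAR)

# "Magnify" by bilinearly upsampling the distance field, then alpha-test at 0.5.
big = small.resize((512, 512), Image.BILINEAR)
crisp = (np.array(big) > 127).astype(np.uint8) * 255
Image.fromarray(crisp).save("magnified.png")
```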
Hope that helps!
There are many anti-aliasing approaches (often misnamed, btw, but that's a dead horse) that can be used. Depending on what you know about the original signal and what the intended use is, different things are most likely to give you the desired result.
"Support from the host OS" is probably most sensible if the output is through the OS display facilities, since they have the most information about what is being done to the image.
I suppose that's a long way of asking what are you actually trying to do? Many graphics libraries will provide some form of antialiasing, whether or not they'll be appropriate depends a lot on what you're trying to achieve.