What is the MRPT approach to environment changes during robot navigation? - mobile-robot-toolkit

About the 2D SLAM libraries in the MRPT project (like rbpf-slam and icp-slam, which use a 2D LiDAR):
what happens when the environment changes or people walk around the robot?
Does the library detect these changes in the environment and eliminate them? Does it apply the changes to the map during navigation? Or are they simply treated as sources of error in the position calculation?

If the scope of your question is SLAM itself only: yes, the changes are integrated into the map as they occur.
How this happens depends on the metric map type: for point clouds, you will end up with many "noisy" areas where the dynamic objects moved. For occupancy grids, if there are more observations detecting a given area as "free" than as "occupied", it will eventually be marked as "free" (and vice versa).
Obviously, having many dynamic objects during mapping can degrade the quality of localization during SLAM, and hence the quality of the map itself.
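For intuition, here is a minimal sketch of the log-odds update that produces this majority-vote behaviour in a typical occupancy grid. This is not MRPT's actual implementation; the cell indexing and the constants are illustrative assumptions.

    #include <algorithm>
    #include <cstddef>
    #include <vector>

    // Each cell stores a log-odds occupancy value. Every observation nudges
    // the cell toward "occupied" or "free", so a moving person leaves only a
    // transient trace that later "free" readings erase.
    struct OccupancyGrid {
        std::vector<float> logOdds;            // one value per cell, 0 = unknown
        static constexpr float kHit   = 0.85f; // added when seen occupied
        static constexpr float kMiss  = -0.4f; // added when seen free
        static constexpr float kClamp = 10.0f; // keeps cells from saturating forever

        void update(std::size_t cell, bool seenOccupied) {
            logOdds[cell] = std::clamp(logOdds[cell] + (seenOccupied ? kHit : kMiss),
                                       -kClamp, kClamp);
        }
        bool isFree(std::size_t cell) const { return logOdds[cell] < 0.0f; }
    };

With asymmetric increments like these, a cell that a pedestrian briefly occupied drifts back below zero once a handful of later scans see through that space again.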

Related

HoloLens: Spatial awareness in dynamic environments

My HoloLens 2 application requires the system to disregard some basic changes in the environment after a hologram has been placed. Sometimes these changes are in close proximity to the hologram, e.g. the physical surface below the hologram shifts laterally while everything else in the room remains constant, or a physical object is registered against the hologram. Currently, these changes tend to cause my hologram to drift. Should I, after placing the hologram, simply turn off the spatial mesh observer in MRTK? This is a fundamental issue about spatial awareness that I don't understand: how does spatial awareness work in dynamic environments, particularly when you want to ignore certain aspects that are changing? I appreciate any advice - I'm a clinician, not a developer, so much of this is new to me.
The following link is probably the place to start informing yourself: Coordinate systems.
Unfortunately, the problem you describe is mentioned there, in the chapter "Headset tracks incorrectly due to dynamic changes in the environment". Apparently, the only suggestion given is to use the device in a less dynamic environment.
To effectively address this problem, one would probably need an AI algorithm that recognizes and responds to complex changes in the space. However, this is only a suggestion.
We recommend using a Spatial Anchor to locate the hologram you placed.
A Spatial Anchor marks an important point in the world to ensure that anchored holograms stay precisely in place.
If you use this technology, note that the locations where you create anchors should have stable visual features (that is, features that don't change often). For example, the physical surface you mentioned, which shifts laterally, is not a good place to create an anchor, but a static position with stable visual features near that surface would be.
If you are new to Spatial Anchors, you can start here: Spatial Anchor

VTK: display integration point data

I'm working with the VTK library in C++.
I have a mesh given as an unstructured grid and certain data given at the integration points of Gaussian quadrature on the cells (created by an external solver). For the sake of simplicity, let's assume that we are talking about scalar data.
I also have a tool which displays VTK data graphically. What I want is to display the mentioned data with that tool, simply as interpolated/extrapolated scalar data on the whole grid.
My question is: is there something native to VTK with which I can attach the scalar data at the integration points to the mesh in some way, such that VTK handles the interpolation and extrapolation?
I mean, I could write an algorithm that processes the data, creates a new grid in which the cells do not share nodes (as the extrapolated values might not be continuous there), extrapolates the scalars to those nodes for each cell, and then displays that. However, this would bypass the native capabilities of the VTK library (which seems to be quite strong in most other regards), and I don't want to reinvent the wheel anyway.
From https://vtk.org/Wiki/images/7/78/VTK-Quadrature-Point-Design-Doc.pdf, I am aware that there is the vtkQuadratureSchemeDefinition class, and I think I know how to handle it. I also noticed vtkQuadraturePointInterpolator, which seems to do the opposite of what I'm searching for: interpolation to the integration points rather than extrapolation from them.
Otherwise, the newest entry in the VTK wiki seems to be https://vtk.org/Wiki/VTK/VTK_integration_point_support, which appears to be quite old, given that it pleads for the existence of some sort of quadrature point support in general, which by now already exists.
There is also a question on the VTK mailing list which looks just like my question here: https://public.kitware.com/pipermail/vtkusers/2013-January/078077.html, which seems to have gone unanswered.
Likewise, the issue https://gitlab.kitware.com/vtk/vtk/issues/17124 also seems to be about what I want to do. It might hint at this currently not being possible, but the fact that it exists as an issue does not imply that it is not already solved (especially with no assignee on the issue).
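For what it's worth, one plausible pipeline combines vtkQuadraturePointsGenerator (which materializes the integration points, with their data, as a point cloud) with vtkPointInterpolator to scatter those values back onto the mesh nodes. This is an untested sketch, not a confirmed answer from the thread: it assumes the grid's field data already carries the vtkQuadratureSchemeDefinition and a per-cell offsets array (the name "QuadratureOffset" is an assumption), and the Gaussian kernel smooths rather than performing true per-cell discontinuous extrapolation.

    #include <vtkDataObject.h>
    #include <vtkGaussianKernel.h>
    #include <vtkNew.h>
    #include <vtkPointInterpolator.h>
    #include <vtkQuadraturePointsGenerator.h>
    #include <vtkUnstructuredGrid.h>

    void interpolateQuadratureData(vtkUnstructuredGrid* grid)
    {
        // 1. Materialize the integration points (with their scalar data) as a
        //    point cloud, using the scheme definition stored in the field data.
        vtkNew<vtkQuadraturePointsGenerator> pointGen;
        pointGen->SetInputData(grid);
        pointGen->SetInputArrayToProcess(
            0, 0, 0, vtkDataObject::FIELD_ASSOCIATION_CELLS, "QuadratureOffset");

        // 2. Scatter those point values onto the grid's nodes with a smooth
        //    kernel; this interpolates in the interior and extrapolates near
        //    the boundary in one step.
        vtkNew<vtkGaussianKernel> kernel;
        kernel->SetSharpness(4.0);
        kernel->SetRadius(1.0); // tune to the element size of your mesh

        vtkNew<vtkPointInterpolator> interpolator;
        interpolator->SetInputData(grid);                             // target mesh
        interpolator->SetSourceConnection(pointGen->GetOutputPort()); // data source
        interpolator->SetKernel(kernel);
        interpolator->Update();
        // interpolator->GetOutput() now carries nodal scalars ready for display.
    }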

How to find out the memory layout of your data structure implementation on a Linux 64-bit machine

In the article http://cacm.acm.org/magazines/2010/7/95061-youre-doing-it-wrong/fulltext,
the author talks about the memory layouts of two data structures, the Binary Heap and the B-Heap, and compares how one has a better memory layout than the other (figures 5 and 6).
I want to get hands-on experience with this. I have an implementation of an N-ary tree and I want to find out the memory layout of my data structure. What is the best way to come up with a memory layout like the one in the article?
Secondly, I think it is easier to identify the memory layout in an array-based implementation. If the implementation of a tree uses pointers, what tools do we have, or what kind of approach is required, to map its memory layout?
Design the code for the data structure under test.
Pre-fill the data structure under test with distinctive sentinel values (0x00000000, 0x01111111, ...) that highlight the layout borders and the data belonging to each data structure element.
Use debugging tools to view the actual live memory content and layout that the coded data structure element under test uses in vivo.
(Be systematic and patient.)
Perhaps just traversing the data structure to print element addresses (and sizes, if they vary) would give you enough information to feed to, for instance, Graphviz? I'm not sure why you included the linux-kernel tag. Basic virtual memory mapping happens at page granularity (ignoring huge pages here), so the physical vs. virtual address distinction doesn't matter. You can easily do your tests in user space.
I would proceed as follows (a minimal sketch of the pipeline follows below):
place calls to dump your N-ary trees in the code, OR use a GDB script to do it
write a script in your favourite scripting language to group objects into pages (masking the lower 12 bits of each address out gives the page id), calculate statistics, see whether objects span multiple pages, do whatever you want; output a Graphviz description file
run Graphviz to enjoy the visualisation
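As a rough illustration of that pipeline, here is a hypothetical dump routine that emits a Graphviz DOT graph directly, clustering nodes by 4 KiB virtual page (address >> 12). The Node type and all names are invented for the example, not taken from the thread.

    #include <cstdint>
    #include <cstdio>
    #include <map>
    #include <utility>
    #include <vector>

    struct Node {
        int key;
        std::vector<Node*> children;
    };

    // Walk the tree, recording which page each node lives on and all edges.
    static void collect(const Node* n,
                        std::map<std::uintptr_t, std::vector<const Node*>>& pages,
                        std::vector<std::pair<const Node*, const Node*>>& edges) {
        pages[reinterpret_cast<std::uintptr_t>(n) >> 12].push_back(n);
        for (const Node* c : n->children) {
            edges.emplace_back(n, c);
            collect(c, pages, edges);
        }
    }

    // Emit DOT: one cluster per virtual page, one arrow per parent/child link.
    void dumpDot(const Node* root) {
        std::map<std::uintptr_t, std::vector<const Node*>> pages;
        std::vector<std::pair<const Node*, const Node*>> edges;
        collect(root, pages, edges);

        std::puts("digraph tree {");
        std::size_t cluster = 0;
        for (const auto& [page, nodes] : pages) {
            std::printf("  subgraph cluster_%zu { label=\"page 0x%zx\";\n",
                        cluster++, static_cast<std::size_t>(page));
            for (const Node* n : nodes)
                std::printf("    n%p [label=\"%d\"];\n",
                            static_cast<const void*>(n), n->key);
            std::puts("  }");
        }
        for (const auto& [from, to] : edges)
            std::printf("  n%p -> n%p;\n",
                        static_cast<const void*>(from), static_cast<const void*>(to));
        std::puts("}");
    }

Pipe the output through dot -Tpng to see how the allocator scattered the nodes across pages.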
The first thing you need to do is figure out the data you need to represent in graphical format. The memory layout in Poul-Henning Kamp's figures shows both the pointer structure and the contiguous virtual-memory pages. The former can easily be displayed using a debugging tool like DDD. The latter takes a bit more effort, and there are more ways to accomplish it.
A few ideas...
Write a function to traverse the data structure and print values, compile as scaffold code and run
Add function and call it from a debugger, such as gdb
Write a script to be called from a debugger
Another possibility nobody has mentioned yet would be reading through the specification of the language you're writing the code in. This should generally let you determine the memory layout of the structures in the actual compiled code (C/C++, etc.), neglecting compiler optimization. The layout can be altered by telling the compiler to lay out the data structures in non-default ways, though (alignas, __attribute__((aligned)), etc.). You would still need to consider how the memory is allocated from the heap and the operating system.
However, once you have the relevant values, you should be able to use any software you like to convert the data into a graphical format (graphviz, etc...).
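To complement that answer, a small self-contained example of inspecting the compile-time layout with sizeof/alignof/offsetof, and of overriding the default alignment with alignas (the struct names are made up for illustration):

    #include <cstddef>
    #include <cstdio>

    struct Mixed {
        char tag;   // 1 byte, typically followed by 3 bytes of padding
        int  value; // aligned to 4 bytes
    };

    struct alignas(64) CacheLine { // force 64-byte (cache-line) alignment
        int data[4];
    };

    int main() {
        std::printf("Mixed: size=%zu align=%zu tag@%zu value@%zu\n",
                    sizeof(Mixed), alignof(Mixed),
                    offsetof(Mixed, tag), offsetof(Mixed, value));
        std::printf("CacheLine: size=%zu align=%zu\n",
                    sizeof(CacheLine), alignof(CacheLine));
    }

On a typical x86-64 build this prints Mixed: size=8 align=4 tag@0 value@4, making the padding visible without any debugger.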

Algorithms and techniques for "unclustering" map features with OpenLayers

I'm working with OpenLayers (though the advice I need is more general) to create an "unclustering" behavior.
Instead of collapsing multiple features that are close to each other at a given zoom level into a single feature, I want to do the opposite: push those features apart so each remains visually legible. For my application (a traceroute visualisation app), the precise location of a feature when zoomed out is basically irrelevant, but seeing each feature (and its label) clearly is of the utmost importance.
For two features, the technique seems trivial (find the line defined by the two features and push them apart along that line). But what about more than two features? My geometry is weak; I feel like this must be a solved problem with a simple solution, but I don't know what magic words to use to even start my research. Where should I look to learn how to do this? What techniques are known to be fast and stable?
Bonus points if you can point me to resources that will help not only with pushing the features apart, but also with adjusting label positions to keep the features as close as possible to their correct geographic positions.
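Since no answer is recorded in the thread, here is one common way to generalize the two-feature idea, sketched for illustration: iterate pairwise repulsions (each pair closer than a minimum distance is pushed apart along its connecting line) until the layout settles. This is essentially a tiny force-directed layout; the function and parameter names are invented for the example.

    #include <cmath>
    #include <cstddef>
    #include <vector>

    struct Pt { double x, y; };

    // Push every pair of points closer than minDist apart along the line
    // connecting them, repeating until no pair moves (or maxIter is reached).
    void uncluster(std::vector<Pt>& pts, double minDist, int maxIter = 100) {
        for (int it = 0; it < maxIter; ++it) {
            bool moved = false;
            for (std::size_t i = 0; i < pts.size(); ++i) {
                for (std::size_t j = i + 1; j < pts.size(); ++j) {
                    double dx = pts[j].x - pts[i].x;
                    double dy = pts[j].y - pts[i].y;
                    double d  = std::hypot(dx, dy);
                    if (d >= minDist) continue;
                    double ux, uy;                    // unit direction i -> j
                    if (d < 1e-9) { ux = 1.0; uy = 0.0; }  // coincident: pick any direction
                    else          { ux = dx / d; uy = dy / d; }
                    double shift = (minDist - d) / 2.0;    // each point takes half the deficit
                    pts[i].x -= ux * shift; pts[i].y -= uy * shift;
                    pts[j].x += ux * shift; pts[j].y += uy * shift;
                    moved = true;
                }
            }
            if (!moved) break;
        }
    }

Keeping each feature tethered to its true position (and the label-placement bonus) is usually handled by adding a weak attractive force back to the original coordinates; the magic words that turn up the literature are "force-directed layout" and "cartographic label placement".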

Is there a visual two-dimensional code editor?

Let me explain what I mean by "two-dimensional code editor": imagine using Inkscape or Gimp on a big canvas (say infinite). The "T - add text" tool is used to write the code. Additionally, all function definitions would be framed, and links would connect the called functions.
In other words: you have a very large sheet of (virtual) paper where you can write.
It would be really useful. I don't want to write code as a long list of lines, especially now that big monitors are cheaper.
Is such a code editor out there?
What's your opinion? Would you use a 2d code editor?
I've written 3 or 4 visual editors, and my second one worked like this; it was for Java and C++ (never published, though I did use it for some published research work).
I still don't much like writing my code 'as a long list of lines'. My point is, after trying a system like this, I tried a windowed system (class outlines in windows, right-click to open code editors), then a tree-based system...
In the long run (I wrote several apps using all of those), the tree-based system with non-overlapping windows felt at once the most scalable (to different monitor sizes) and, foremost, the most productive, because dragging the text boxes and links and/or windows in the first version was necessary without adding much to the programming experience, so it felt wasteful.
If you want to try some of this stuff out, you can google Antegram for Java (Java only), Antegram for Web (JavaScript/PHP/ActionScript), and ee-ide (on oogtech.org). I'm not sure I could dig up the original C++/Java textbox-and-links editor (which could collapse graphs as well, and had an infinite canvas, so pretty close to what you describe).
I'm not working on this as much as I used to, as few programmers besides me ever seemed to like it, but if you like working the tree way, or feel like adding stuff for your own purposes, ee-ide would be the way to go, as it's nicely modular and easy to extend compared to the rest.
On the commercial side, you can configure Visual Studio to work with UML-like diagrams. I have a feeling it might be a little too heavy (although it's definitely more coding- than UML-oriented), but I'm not sure; I haven't really tried it yet.
This probably doesn't answer your question exactly, but anyway.
Have a look at the NodeBox beta. It is a visual programming environment, mostly for creating generative graphics. You can program and edit the nodes with Python code, and connect and reuse them in multiple ways. (Windows and Mac OS)
Also worth mentioning (in terms of concept) is Field. It is for programming performances and arranges bits of code on a stage/timeline. Very interesting, but also very confusing. (Mac OS only)
The third one is vvvv. It is used a lot by graphical artists to create realtime 3D visuals. Node-based. (Windows only)
NodeBox and Field are open source, so if you are looking to create something yourself, you can see how it's done there.
Check this out. I came across it today and remembered this question.
Code Bubbles
Developers spend significant time reading and navigating code fragments spread across multiple locations. The file-based nature of contemporary IDEs makes it prohibitively difficult to create and maintain a simultaneous view of such fragments. We propose a novel user interface metaphor for code understanding and maintenance based on collections of lightweight, editable fragments called bubbles, which form concurrently visible working sets.
The essential goal of this project is to make it easier for developers to see many fragments of code (or other information) at once without having to navigate back and forth. Each of these fragments is shown in a bubble. A bubble is a fully editable and interactive view of a fragment such as a method or collection of member variables. Bubbles, in contrast to windows, have minimal border decoration, avoid clipping their contents by using automatic code reflow and elision, and do not overlap but instead push each other out of the way. Bubbles exist in a large, pannable 2-D virtual space where a cluster of bubbles comprises a concurrently visible working set. Bubbles support a lightweight grouping mechanism, and further support connections between them.
A quantitative user study indicates that Code Bubbles increased performance significantly for two controlled code-understanding tasks. A qualitative user study with 23 professional developers indicates substantial interest and enthusiasm for the approach, despite the radical departure from what developers are used to.
http://www.cs.brown.edu/people/acb/codebubbles_site.htm
At one point, LabVIEW had a programming mode like this: you connected program blocks together graphically.
It's been so long since I've used LabVIEW that I don't know whether it is still the same.
For me, the MVVM pattern means that there's no code behind the UI controls anyway; the logic all lives in a class with properties.
The properties use WPF data binding to update the UI controls. For example, on the form (or window, page, whatever), MySearchButton.IsEnabled is bound to the ViewModel.MySearchButtonIsEnabled property. So the app logic runs in the ViewModel class and just sets its own properties, and the UI updates automatically.
Although this is specific to MS WPF, the pattern actually stems from Smalltalk and is found across the development field as MVP. Without WPF, one would need to write the data-binding or 'presenter' logic oneself, which is common.
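To make that binding mechanism concrete outside WPF, here is a plain-C++ sketch of the idea: the view model exposes an observable property, the "view" subscribes, and the logic only ever sets the property. All names are invented for the example; real frameworks add two-way binding, change batching, and so on.

    #include <functional>
    #include <iostream>
    #include <utility>
    #include <vector>

    // An observable value: setting it notifies every subscribed observer.
    template <typename T>
    class Property {
        T value_{};
        std::vector<std::function<void(const T&)>> observers_;
    public:
        void subscribe(std::function<void(const T&)> f) {
            observers_.push_back(std::move(f));
        }
        void set(const T& v) {
            value_ = v;
            for (auto& f : observers_) f(value_); // push the change to the UI
        }
        const T& get() const { return value_; }
    };

    struct SearchViewModel {
        Property<bool> searchButtonEnabled; // app logic only touches this
    };

    int main() {
        SearchViewModel vm;
        // The "binding": a stand-in for MySearchButton.IsEnabled following the VM.
        vm.searchButtonEnabled.subscribe([](const bool& enabled) {
            std::cout << "button enabled = " << std::boolalpha << enabled << '\n';
        });
        vm.searchButtonEnabled.set(true); // logic runs, "UI" updates automatically
    }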
This means the UI can be torn off and a new one pasted in really quickly, with little code knowledge required from the UI guy - who, in an ideal world, is a crack creative type who drives a '70s Citroen.
So my point is that, although it sounds like a neat innovation, a 2D editor like this would be assisting a coding style that is no longer considered optimal.
