ICP registration issue - python-3.x

I use the Open3D ICP algorithm to register two sets of point clouds.
The two input point clouds were similar. However, the resulting error value was high, and when I opened the viewer, the two sets were only partially aligned.
Could someone tell me how to improve this? Thanks.

It would help if you could provide some screenshots, so we can see the problem in more detail.
From my experience,
1. Bounding box center: Are your point clouds centered? As far as I know, the first step is to center both point clouds before running ICP registration.
2. Scale: Did you scale your two point clouds to the same level?
3. Iterations: Some cases need more iterations to find the match, but the trade-off is longer processing time, so try increasing the iteration count and see how it works. A rough sketch of all three steps is below.
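A minimal sketch of those three steps with Open3D, assuming two roughly overlapping clouds (the file names and the 0.05 correspondence threshold are placeholders; recent releases expose ICP under o3d.pipelines.registration, older ones under o3d.registration):

    import numpy as np
    import open3d as o3d

    # Placeholder file names; substitute your own clouds.
    source = o3d.io.read_point_cloud("source.ply")
    target = o3d.io.read_point_cloud("target.ply")

    # 1. Center both clouds on the origin.
    source.translate(-source.get_center())
    target.translate(-target.get_center())

    # 2. Normalize both clouds by their bounding-box extent so they share a scale.
    source.scale(1.0 / np.max(source.get_max_bound() - source.get_min_bound()), center=(0, 0, 0))
    target.scale(1.0 / np.max(target.get_max_bound() - target.get_min_bound()), center=(0, 0, 0))

    # 3. Point-to-point ICP with a larger iteration budget.
    threshold = 0.05               # max correspondence distance; tune for your data
    init = np.identity(4)          # initial transformation guess
    result = o3d.pipelines.registration.registration_icp(
        source, target, threshold, init,
        o3d.pipelines.registration.TransformationEstimationPointToPoint(),
        o3d.pipelines.registration.ICPConvergenceCriteria(max_iteration=500))

    print("fitness:", result.fitness, "inlier RMSE:", result.inlier_rmse)
    source.transform(result.transformation)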

How to match user height between VR and AR?

I asked this some time ago on GitHub and was asked to move the question here, so I will.
In my experience, starting a game with a HoloLens makes the camera start at 0,0,0 in the scene. When starting with an HMD, the head has approximately the correct height, which can also be adjusted in the Mixed Reality Portal if it isn't perfect.
If those two were to meet in a networked environment, one would see the other at their feet, or, viewed the other way around, high up in the air.
To get those two to meet at eye level, you either raise one up or lower the other down. No matter the case, you need to know by how much.
The HoloLens does not have an internal height representation; at best you could calculate it from the generated spatial mesh. The HMD, on the other hand, does have information about its height, a base height even, otherwise I couldn't configure that in the Portal, kneel down, etc., and still be the correct height above the floor.
Now the question is, how do I read this base height for the HMD so I can lower the floor to that height, effectively setting the networked parties to eye level?
For now I have to set an arbitrary height of about 1.6 meters, but that's my colleague's standing height. I am about 1.93 meters tall.
NeerajW on GitHub wanted to see if he could find an API that returns the Portal default height, but never replied.
With the HoloLens 2 joining the community, that's now two AR devices that might want to meet VR avatars from around the world.
How do you guys do this?
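To make the arithmetic concrete: once some base height is known, the alignment itself is a single vertical offset. A minimal sketch, where the 1.75 m value is a hypothetical stand-in for whatever the Portal or an API would report:

    # HoloLens camera starts at the scene origin, i.e. y = 0.
    hololens_camera_y = 0.0

    # Hypothetical base (eye) height of the VR user; ideally read from an API
    # instead of being hard-coded.
    vr_eye_height_m = 1.75

    # Offset to apply to one party's root transform (lower the HoloLens user's
    # world, or raise the VR user's floor) so both heads meet at eye level.
    vertical_offset = vr_eye_height_m - hololens_camera_y
    print(f"Apply a vertical offset of {vertical_offset:.2f} m")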
In your scenario, are you trying to 1-to-1 match the actual user heights? If so, you may wish to place the avatars at eye level (if they are small/hovering objects) or on the floor plane (if they are humanoid).
Since the VR user cannot actually see the HoloLens user, this may work well. For the HoloLens user it may feel odd, presuming the VR user is visible (in the same room).
In the Holograms 250 academy course (please note this uses the legacy HoloToolkit), the app represents HoloLens users as floating clouds in VR. The VR user was represented as a small figure on an island model to the HoloLens user.
I hope this is helpful.
David
(I also have an inquiry with other members of the team to see if there are other ideas).

Finding all geohashes within two bounding coordinates

I have coordinates, which are assigned a corresponding geohash in my database. Now I want to retrieve all of the coordinates within two bounding coordinates (top right and top left corner). How can I do that properly?
I tried using the single geohash that covers both of those bounding coordinates, but this does not work when they are in completely different regions of the world (their geohashes then share no common prefix).
Is there a better way to do that?
Thanks for your help
Unfortunately, this isn't something you can do out of the box with Datastore / App Engine. (There are no built-in spatial queries.)
For early prototyping, etc., you can do it the hard way: retrieve all the rows and, in code, discard the ones that fall outside your query. Obviously, this is probably not viable with real production data. A rough sketch of that approach is below.
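A minimal sketch of that brute-force filter, assuming each row carries its latitude/longitude; fetch_all_rows() is a hypothetical stand-in for whatever Datastore query you already run:

    import collections

    # Hypothetical stand-in for your Datastore query; replace with the real fetch.
    Row = collections.namedtuple("Row", ["lat", "lon", "geohash"])

    def fetch_all_rows():
        return [Row(52.52, 13.40, "u33db"), Row(48.86, 2.35, "u09tv")]

    def in_bounding_box(lat, lon, sw, ne):
        """True if (lat, lon) lies inside the box spanned by the south-west
        and north-east corners (no antimeridian handling)."""
        return sw[0] <= lat <= ne[0] and sw[1] <= lon <= ne[1]

    def coordinates_in_box(sw, ne):
        # Brute force: pull every row and keep only those inside the box.
        return [row for row in fetch_all_rows()
                if in_bounding_box(row.lat, row.lon, sw, ne)]

    # Example: everything between rough SW and NE corners of Berlin.
    print(coordinates_in_box(sw=(52.35, 13.10), ne=(52.65, 13.75)))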
See related question Query for Entities Nearby with Geopt for some possible production solutions.

Adding weight to variables in a line graph in Tableau

I have a dataset consisting of calls going to agents (actually 10 of them) per day. These agents can either answer calls or transfer them to a call center. What we are interested in is whether each of these agents answers more calls than he transfers. In order to answer this, I have created a variable for each of these agents:
Answered/Transferred
I am using a line graph to depict these variables per agent over time.
Now, if this variable is less than 1, the agent transferred more calls than he answered. The problem is that this is not a reliable way to measure the overall impact of transferred calls, because the traffic going to agents 1, 2, 3 is far greater than the traffic going to agents 5, 6, 7, and so on. Therefore, I am trying to come up with a way to "weight" the variables I created above, i.e. somehow include the total number of calls reaching each agent (irrespective of whether they are transferred or answered) in my calculations. That means that if one agent gets 5 calls per day while another gets 5,000 per day, I should find a way to reflect this in my graphs.
Do you guys have any ideas?
Easiest would be to drag the weight measure onto Color and choose something like a temperature-diverging palette. Depending on your viz, you can also drag the weight measure onto Size and, for example, make bars or lines thicker to show that more records are behind them.
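Outside of Tableau, the weighting idea can be sketched in a few lines of pandas (the column names and sample rows are assumptions): the weight is simply each agent's total daily call volume, which is what you would then map to Color or Size:

    import pandas as pd

    # Hypothetical call log: one row per call, with the handling agent,
    # the date, and whether the call was answered or transferred.
    calls = pd.DataFrame({
        "agent":   ["A1", "A1", "A1", "A2", "A2"],
        "date":    ["2021-06-01"] * 5,
        "outcome": ["answered", "answered", "transferred", "transferred", "answered"],
    })

    daily = (calls
             .groupby(["agent", "date"])["outcome"]
             .value_counts()
             .unstack(fill_value=0))

    daily["ratio"] = daily["answered"] / daily["transferred"]   # the Answered/Transferred variable
    daily["weight"] = daily["answered"] + daily["transferred"]  # total calls -> map to Color or Size

    print(daily)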

Node.js: Not getting the details using dataSources with datasets

I am trying to get the step count by date. I took the data from Google Fit using this API:
https://www.googleapis.com/fitness/v1/users/me/dataSources/derived:com.google.step_count.delta:com.google.android.gms:estimated_steps/datasets/1457548200000000000-1457631000000000000&token=1111111111
I get only a limited step count, not all the steps for that date. Why does this kind of problem occur when getting Google Fit data?
Can anyone suggest a better way to get all the data from Google Fit?
Using the derived:com.google.step_count.delta:com.google.android.gms:estimated_steps data source will give you varying results depending on the scenario. This is caused mainly by the sensors used, which may be why you think you are getting limited results.
estimated_steps also takes into account activity, and estimates steps when there are none. For instance, assume the user walked for 30 minutes, but the hardware step counter only recorded 10 steps. We know that number is inaccurate, so instead we estimate, say, 3000 steps during that time.
This was noted and discussed in this SO post.
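For reference, the dataset read from the question written out explicitly, sketched here in Python with requests (the access token is a placeholder, and summing intVal assumes the usual com.google.step_count.delta response shape):

    import requests

    ACCESS_TOKEN = "ya29...."   # placeholder OAuth 2.0 token with a Fitness scope

    data_source = ("derived:com.google.step_count.delta:"
                   "com.google.android.gms:estimated_steps")
    dataset = "1457548200000000000-1457631000000000000"   # start-end in nanoseconds since the epoch

    url = ("https://www.googleapis.com/fitness/v1/users/me/"
           f"dataSources/{data_source}/datasets/{dataset}")

    resp = requests.get(url, headers={"Authorization": f"Bearer {ACCESS_TOKEN}"})
    resp.raise_for_status()

    # Each point carries the step delta for its interval; summing the deltas
    # gives the total for the requested window.
    total_steps = sum(value["intVal"]
                      for point in resp.json().get("point", [])
                      for value in point["value"])
    print(total_steps)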

Fusion Tables: Polygon not displayed as of certain zoom level

I'm working on a map that shows different population statistics on a rather granular level in Berlin (447 sub-districts).
https://www.google.com/fusiontables/data?docid=1tIAPGaYK1iEWWLANQOupkAqCcPhVauMjdPS1qOs#map:id=3
For some reason, a small number of polygons (3) are not displayed once you zoom into the map (zoom level 12 or higher).
Since the polygons are displayed at the zoom level just before that, they should have the proper coordinates. I first thought the shapefiles (KMLs provided by the local statistics authority) might be buggy, but that does not seem to be the case.
Can anybody explain to me why this happens?
Thank you very much!
Michael
There are two possibilities that I can think of:
1. It is a complexity problem or a winding-direction issue with the polygon. There is a thread on the Fusion Tables Users Group discussing this issue.
2. It is a complexity issue with the number of "features" on the tile. See Limits in the documentation; it used to be more clearly defined.
Reversing the winding direction of two of the problem polygons seems to fix the issue:
https://www.google.com/fusiontables/DataSource?snapid=S787935DQC4
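If you need to flip the winding yourself, a polygon ring's orientation can be checked with the shoelace (signed-area) formula and reversed when needed; a minimal sketch, assuming the ring is a plain list of (lon, lat) pairs (which orientation the renderer expects is a separate question):

    def signed_area(ring):
        """Shoelace formula: positive for counter-clockwise rings,
        negative for clockwise ones (ring = list of (lon, lat) pairs)."""
        area = 0.0
        for (x1, y1), (x2, y2) in zip(ring, ring[1:] + ring[:1]):
            area += x1 * y2 - x2 * y1
        return area / 2.0

    def ensure_counter_clockwise(ring):
        """Reverse the ring if it is wound clockwise."""
        return ring if signed_area(ring) > 0 else list(reversed(ring))

    # Hypothetical triangle wound clockwise; the helper flips it.
    ring = [(13.40, 52.52), (13.41, 52.53), (13.42, 52.52)]
    print(signed_area(ring))               # negative -> clockwise
    print(ensure_counter_clockwise(ring))  # same points, reversed order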
