Configure Iterative Closest Point PCL - android-ndk

I am having trouble configuring ICP in PCL 1.6 (on Android), and I believe that is the cause of an incorrect transformation matrix. What I've tried so far is this:
Downsample the point clouds I use in ICP with the following code:
pcl::VoxelGrid<pcl::PointXYZ> grid;
grid.setLeafSize(6, 6, 6);    // leaf size, in the cloud's units
grid.setInputCloud(output);
grid.filter(*tgt);            // downsampled target cloud
grid.setInputCloud(cloud_src);
grid.filter(*src);            // downsampled source cloud
Try to align the two point clouds with the following code:
pcl::IterativeClosestPoint<pcl::PointXYZ, pcl::PointXYZ> icp;
icp.setInputCloud(tgt);
icp.setInputTarget(src);
pcl::PointCloud<pcl::PointXYZ> Final;
pcl::PointCloud<pcl::PointXYZ> transformedCloud;
icp.setMaxCorrespondenceDistance(0.08);
icp.setMaximumIterations(500);
//icp.setTransformationEpsilon(1e-21);
icp.setTransformationEpsilon(1e-12);
icp.setEuclideanFitnessEpsilon(1e-12);
icp.setRANSACIterations(2000);
icp.setRANSACOutlierRejectionThreshold(0.6);
I have tried all kinds of values for these parameters, but with no success.
The code I use to create the point cloud:
int density = 1;
int counter = 0;
for (int y = 0; y < depthFrame.getHeight(); y += density) {
    for (int x = 0; x < depthFrame.getWidth(); x += density) {
        short z = pDepthRow[y * depthVideoMode.getResolutionX() + x];
        if (z > 0 && z < depthStream.getMaxPixelValue()) {
            cloud_src->points[counter].x = x;
            cloud_src->points[counter].y = y;
            cloud_src->points[counter].z = z;
            counter++; // without this, every valid point overwrites points[0]
        }
    }
}
Any chance somebody can help me out configuring ICP?

As far as I know, PCL uses meters as its unit of measurement. So is a 6 x 6 x 6 leaf size for downsampling really intended? That seems huge!
Try some lower values, and if that doesn't work either, try ICP with the simplest possible configuration, as follows:
pcl::IterativeClosestPoint<pcl::PointXYZ, pcl::PointXYZ> icp;
icp.setInputCloud(tgt);
icp.setInputTarget(src);
pcl::PointCloud<pcl::PointXYZ> Final;
icp.align(Final);
Once you have this working (and it should work), start tuning the other parameters; it shouldn't be that hard to do. A fuller, self-contained version of this baseline is sketched below.
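For instance, here is a minimal, untested sketch of that baseline, assuming the clouds are in meters (hence a 0.05 m leaf instead of 6) and keeping the PCL 1.6 API from the question; the helper names downsample and alignClouds are mine, not PCL's:

#include <iostream>
#include <pcl/point_types.h>
#include <pcl/point_cloud.h>
#include <pcl/filters/voxel_grid.h>
#include <pcl/registration/icp.h>

typedef pcl::PointCloud<pcl::PointXYZ> Cloud;

// Downsample with a metric leaf size, e.g. 0.05 m, rather than 6.
Cloud::Ptr downsample(const Cloud::Ptr &in, float leaf)
{
    Cloud::Ptr out(new Cloud);
    pcl::VoxelGrid<pcl::PointXYZ> grid;
    grid.setLeafSize(leaf, leaf, leaf);
    grid.setInputCloud(in);
    grid.filter(*out);
    return out;
}

// Run ICP with default parameters only, and check convergence
// before trusting the resulting transformation.
Eigen::Matrix4f alignClouds(const Cloud::Ptr &src, const Cloud::Ptr &tgt)
{
    pcl::IterativeClosestPoint<pcl::PointXYZ, pcl::PointXYZ> icp;
    icp.setInputCloud(src);   // PCL 1.6 API; later releases call this setInputSource
    icp.setInputTarget(tgt);
    Cloud Final;
    icp.align(Final);
    if (!icp.hasConverged())
        std::cerr << "ICP did not converge, fitness: " << icp.getFitnessScore() << std::endl;
    return icp.getFinalTransformation();
}

Only once this converges sensibly would I reintroduce the tuned parameters (correspondence distance, epsilons, RANSAC settings), one at a time.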
Hope that helps, Cheers!

Related

Project Tango Point Cloud strange crash, and dense depth map

I am trying to use the Project Tango C API, but the application crashes with no error if the point cloud contains more than ~6.5k points (found after some testing) with the following code:
int width  = mImageSource->getDepthImageSize().x;
int height = mImageSource->getDepthImageSize().y;
double fx = mImageSource->calib.intrinsics_d.projectionParamsSimple.fx;
double fy = mImageSource->calib.intrinsics_d.projectionParamsSimple.fy;
double cx = mImageSource->calib.intrinsics_d.projectionParamsSimple.px;
double cy = mImageSource->calib.intrinsics_d.projectionParamsSimple.py;

memset(inputRawDepthImage->GetData(MEMORYDEVICE_CPU), -1, sizeof(short) * width * height);

for (int i = 0; i < XYZ_ij->xyz_count; i++) {
    float X = XYZ_ij->xyz[i*3][0];
    float Y = XYZ_ij->xyz[i*3][1];
    float Z = XYZ_ij->xyz[i*3][2];
    // skip invalid (zero or NaN) points
    if (Z < EPSILON || (X < EPSILON && -X < EPSILON) || (Y < EPSILON && -Y < EPSILON) ||
        X != X || Y != Y || Z != Z)
        continue;
    int x_2d = (int)(fx * X / Z + cx);
    int y_2d = (int)(fy * Y / Z + cy);
    if (x_2d >= 0 && x_2d < width && y_2d >= 0 && y_2d < height && (x_2d != 0 || x_2d != 0)) {
        inputRawDepthImage->GetData(MEMORYDEVICE_CPU)[x_2d + y_2d * width] = (short)(Z * 1000);
    } else {
        continue;
    }
}
However, if I use for (int i = 0; i < XYZ_ij->xyz_count && i < 6500; i++), everything works fine. I am just wondering if there is an upper bound for accessing the point cloud with the C API, or whether I did something wrong.
(width is 320, height is 180, and the other intrinsics are loaded from the Tango API)
In addition, Google mentions using a nearest-neighbor filter to get a dense depth map at the bottom of this page. Is there an interface in the Tango API for this? Or would anyone suggest an open-source implementation of it?
I am also wondering if there is any way to "pull" the colored image (1280x720) in onXYZijAvailable, because I need a dense, synchronized colored point cloud. Do I need to apply the extrinsic matrix to align the two coordinate frames, or do I only need to subsample the color image (assuming their coordinate systems are the same)?
Thank you for any advice!
In your code that looks up the depth sample coordinates...
for (int i = 0; i < XYZ_ij->xyz_count; i++) {
    float X = XYZ_ij->xyz[i*3][0];
    float Y = XYZ_ij->xyz[i*3][1];
    float Z = XYZ_ij->xyz[i*3][2];
...you should be using an index of i, not i*3. It is a 2D array, so you don't have to manage the stride of the higher dimension yourself. A corrected version of the lookup is sketched below.
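A minimal corrected loop, assuming the same TangoXYZij layout as in the question (xyz is an array of float[3], one triple per point):

for (int i = 0; i < XYZ_ij->xyz_count; i++) {
    // index by point: xyz[i] is the i-th [x, y, z] triple
    float X = XYZ_ij->xyz[i][0];
    float Y = XYZ_ij->xyz[i][1];
    float Z = XYZ_ij->xyz[i][2];
    // ... validity checks and projection exactly as in the question ...
}

With i*3 the index runs up to three times past xyz_count, which would explain a crash once the count grows beyond about a third of the buffer.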
The SDK does not provide a call to fill in locations with no depth samples, probably because there are many approaches with different tradeoffs. The Wikipedia page on nearest-neighbor search is a reasonable place to start. There is an interface to FLANN in OpenCV; a rough sketch of one way to use it follows.
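For illustration only, here is an untested sketch of hole filling with OpenCV's FLANN interface. It assumes a CV_16S depth image where negative values mark missing samples, and fillHolesNearestNeighbor is a made-up helper name:

#include <vector>
#include <opencv2/opencv.hpp>  // pulls in cv::flann

void fillHolesNearestNeighbor(cv::Mat &depth)  // depth: CV_16S, negative = hole
{
    // collect the 2D coordinates of every valid sample
    std::vector<cv::Point2f> valid;
    for (int y = 0; y < depth.rows; y++)
        for (int x = 0; x < depth.cols; x++)
            if (depth.at<short>(y, x) >= 0)
                valid.push_back(cv::Point2f((float)x, (float)y));
    if (valid.empty())
        return;

    // build a kd-tree over the valid pixel coordinates (N x 2 float matrix)
    cv::Mat features = cv::Mat(valid).reshape(1);
    cv::flann::Index index(features, cv::flann::KDTreeIndexParams(4));

    cv::Mat result = depth.clone();
    for (int y = 0; y < depth.rows; y++) {
        for (int x = 0; x < depth.cols; x++) {
            if (depth.at<short>(y, x) >= 0)
                continue;                    // already has a sample
            float query[2] = { (float)x, (float)y };
            cv::Mat indices, dists;
            index.knnSearch(cv::Mat(1, 2, CV_32F, query), indices, dists,
                            1, cv::flann::SearchParams());
            const cv::Point2f &p = valid[indices.at<int>(0, 0)];
            result.at<short>(y, x) = depth.at<short>((int)p.y, (int)p.x);
        }
    }
    depth = result;
}

This is the brute-force version; in practice you would probably cap the search distance so distant samples don't bleed into large holes.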
The SDK will only deliver the latest color image to you. If you want a prior image (e.g., one with a timestamp close to your depth samples), you will have to manage that yourself. Because you can never get a color image at exactly the same timestamp as your depth samples (the same camera is used in different modes for both), you theoretically should apply the extrinsic pose to align them. In practice, if the motion is small over the half frame time or less between the timestamps, I think most people are going to ignore it. If you do apply it, the math is sketched below.
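The alignment is just a rigid transform followed by a pinhole projection. A minimal sketch, where R, t, and the color intrinsics are placeholders for your calibration values, not Tango API calls:

#include <Eigen/Dense>

// Map a 3D point from the depth camera frame into color image coordinates.
Eigen::Vector2d projectToColor(const Eigen::Vector3d &p_depth,
                               const Eigen::Matrix3d &R,   // depth-to-color rotation
                               const Eigen::Vector3d &t,   // depth-to-color translation
                               double fx, double fy, double cx, double cy)
{
    Eigen::Vector3d p = R * p_depth + t;            // apply the extrinsic pose
    return Eigen::Vector2d(fx * p.x() / p.z() + cx, // pinhole projection
                           fy * p.y() / p.z() + cy);
}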

How to make 161 meter / 528 ft circles with OSMDROID?

I'm using OsmDroid with OpenStreetMap and can make markers and polylines, but I can't find any examples of how to make 161 m / 528 ft circles around a marker.
a) How do I make circles?
b) How do I make them 161m/528ft in size?
Thanks to MKer, I got an idea of how to solve the problem and wrote this piece of code, which works:
oPolygon = new org.osmdroid.bonuspack.overlays.Polygon(this);
final double radius = 161;
ArrayList<GeoPoint> circlePoints = new ArrayList<GeoPoint>();
for (float f = 0; f < 360; f += 1) {
    circlePoints.add(new GeoPoint(latitude, longitude).destinationPoint(radius, f));
}
oPolygon.setPoints(circlePoints);
oMap.getOverlays().add(oPolygon);
I know this can be optimized: I'm drawing 360 points, no matter what the zoom is!
If you want a "graphical" circle, you can easily implement your own CircleOverlay, using DirectedLocationOverlay as a very good starting point.
If you want a "geographical" circle (which will appear more or less as an ellipse), you can use the OSMBonusPack Polygon, which you define with this array of GeoPoints:
ArrayList<GeoPoint> circlePoints = new ArrayList<GeoPoint>();
int iSteps = 360;                         // number of vertices on the circle
double fStepSize = 2 * Math.PI / iSteps;
double radiusLatDeg = radius / 111320.0;  // radius in meters to degrees of latitude
double radiusLonDeg = radiusLatDeg / Math.cos(Math.toRadians(centerLat));
for (double f = 0; f < 2 * Math.PI; f += fStepSize) {
    circlePoints.add(new GeoPoint(centerLat + radiusLatDeg * Math.sin(f),
                                  centerLon + radiusLonDeg * Math.cos(f)));
}
(Warning: I translated this from a piece of Nominatim PHP code, without testing.)

Finding the local maxima/peaks and minima/valleys of histograms

Ok, so I have a histogram (represented by an array of ints), and I'm looking for the best way to find its local maxima and minima. Each histogram should have three peaks, one of them (the first one) probably much higher than the others.
I want to do several things:
Find the first "valley" following the first peak (in order to get rid of the first peak altogether in the picture)
Find the optimum "valley" value in between the remaining two peaks to separate the picture
I already know how to do step 2 by implementing a variant of Otsu.
But I'm struggling with step 1.
In case the valley in between the two remaining peaks is not low enough, I'd like to give a warning.
Also, the image is quite clean, with little noise to account for.
What would be brute-force algorithms for steps 1 and 3? I could find a way to implement Otsu, but the brute force escapes me, math-wise. As it turns out, there is more documentation on methods like Otsu and less on simply finding peaks and valleys. I am not looking for anything more than whatever gets the job done (i.e., it's a temporary solution that just has to be implementable in a reasonable timeframe, until I can spend more time on it).
I am doing all this in C#.
Any help on which steps to take would be appreciated!
Thank you so much!
EDIT: some more data:
Most histograms are likely to be like the first one, with the first peak representing background.
Use the peakiness test. It is a method that finds every possible peak between two local minima and measures its peakiness with a formula; if the peakiness is higher than a threshold, the peak is accepted.
Source: UCF CV CAP5415 lecture 9 slides
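For reference, the measure the code below computes, for a peak of height P between valley endpoints vA and vB, where N is the sum of the histogram bins across the span and W is its width, is:

peakiness = (1 - (vA + vB) / (2 * P)) * (1 - N / (W * P))

Both factors are close to 1 for a tall, sharp peak rising out of deep valleys, and fall toward 0 for flat bumps.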
Below is my code:
public static List<int> PeakinessTest(int[] histogram, double peakinessThres)
{
    int j = 0;
    List<int> valleys = new List<int>();
    int vA = histogram[j];   // the start of the valley
    int P = vA;              // the peak height
    int vB = 0;              // the end of the valley
    int W = 1;               // the width of the valley; default width is 1
    int N = 0;               // the sum of the pixels between vA and vB
    double peakiness = 0.0;  // the measure of the peak's peakiness
    int peak = 0;
    bool l = false;
    try
    {
        while (j < 254)
        {
            l = false;
            vA = histogram[j];
            P = vA;
            W = 1;
            N = vA;
            int i = j + 1;
            // Climb to the peak
            while (P < histogram[i])
            {
                P = histogram[i];
                W++;
                N += histogram[i];
                i++;
            }
            // Descend to the border of the valley on the other side
            peak = i - 1;
            vB = histogram[i];
            N += histogram[i];
            i++;
            W++;
            l = true;
            while (vB >= histogram[i])
            {
                vB = histogram[i];
                W++;
                N += histogram[i];
                i++;
            }
            // Calculate the peakiness
            peakiness = (1 - (double)((vA + vB) / (2.0 * P))) * (1 - ((double)N / (double)(W * P)));
            if (peakiness > peakinessThres && !valleys.Contains(j))
            {
                //peaks.Add(peak);
                valleys.Add(j);
                valleys.Add(i - 1);
            }
            j = i - 1;
        }
    }
    catch (Exception)
    {
        // Running past the end of the histogram ends the scan;
        // finish the last valley at bin 255 if we were mid-descent.
        if (l)
        {
            vB = histogram[255];
            peakiness = (1 - (double)((vA + vB) / (2.0 * P))) * (1 - ((double)N / (double)(W * P)));
            if (peakiness > peakinessThres)
                valleys.Add(255);
            //peaks.Add(255);
            return valleys;
        }
    }
    //if (!valleys.Contains(255))
    //    valleys.Add(255);
    return valleys;
}

Pattern Recognition for image comparison in .NET

Can anybody share code or an algorithm (using pattern recognition) for image comparison in .NET?
I need to compare two images of different resolutions and textures and find the difference. I currently have code to find the difference between two images in C#:
// Load the images.
Bitmap bm1 = (Bitmap)(Image.FromFile(txtFile1.Text));
Bitmap bm2 = (Bitmap)(Image.FromFile(txtFile2.Text));

// Make a difference image.
int wid = Math.Min(bm1.Width, bm2.Width);
int hgt = Math.Min(bm1.Height, bm2.Height);
Bitmap bm3 = new Bitmap(wid, hgt);

// Create the difference image.
bool are_identical = true;
Color eq_color = Color.Transparent;
Color ne_color = Color.Transparent;
for (int x = 0; x < wid; x++)
{
    for (int y = 0; y < hgt; y++)
    {
        if (bm1.GetPixel(x, y).Equals(bm2.GetPixel(x, y)))
        {
            bm3.SetPixel(x, y, eq_color);
        }
        else
        {
            bm1.SetPixel(x, y, ne_color);
            are_identical = false;
        }
    }
}

// Display the result.
picResult.Image = bm1;
Bitmap Logo = new Bitmap(picResult.Image);
Logo.MakeTransparent(Logo.GetPixel(1, 1));
picResult.Image = (Image)Logo;
//this.Cursor = Cursors.Default;
if ((bm1.Width != bm2.Width) || (bm1.Height != bm2.Height))
{
    are_identical = false;
}
if (are_identical)
{
    MessageBox.Show("The images are identical");
}
else
{
    MessageBox.Show("The images are different");
}
//bm1.Dispose();
//bm2.Dispose();
BUT this only compares two images of the same resolution and size. If there is a shadow on one image (but the two images are otherwise the same), it reports a difference between them. So I am trying to compare using pattern recognition.
As nailxx said, there is no "100% working free code" or anything like that. Some years ago I helped implement a "face recognition" app, and one of the things we used was "local binary patterns". It's not too easy, but it gave quite good results. Find a paper about it here:
Local binary patterns
Edit: I'm afraid I can't find the paper that I used back then; it was shorter and focused on the LBP itself rather than on how to use it with textures.
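To give a feel for the operator, here is a rough, untested sketch of the basic 3x3 LBP in C++ (the algorithm ports directly to C#); computeLBP and the flat grayscale buffer layout are my own assumptions, not from the paper:

#include <cstdint>
#include <vector>

// Basic 3x3 LBP: each pixel becomes an 8-bit code with one bit per neighbor,
// set when that neighbor is >= the center pixel. Histograms of these codes
// describe texture and are fairly robust to lighting changes such as shadows.
std::vector<uint8_t> computeLBP(const std::vector<uint8_t> &gray, int width, int height)
{
    std::vector<uint8_t> lbp(gray.size(), 0);
    const int dx[8] = { -1, 0, 1, 1, 1, 0, -1, -1 };  // clockwise from top-left
    const int dy[8] = { -1, -1, -1, 0, 1, 1, 1, 0 };
    for (int y = 1; y < height - 1; y++) {
        for (int x = 1; x < width - 1; x++) {
            uint8_t center = gray[y * width + x];
            uint8_t code = 0;
            for (int k = 0; k < 8; k++) {
                uint8_t neighbor = gray[(y + dy[k]) * width + (x + dx[k])];
                code = (uint8_t)((code << 1) | (neighbor >= center ? 1 : 0));
            }
            lbp[y * width + x] = code;
        }
    }
    return lbp;
}

Comparing block-wise histograms of these codes, rather than raw pixels, is what makes this much more tolerant of the shadow problem in the question than a pixel-by-pixel diff.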
Your request is a really complex scientific task (not even an engineering one).
The basic obvious algorithm is the following:
Somehow segment all objects in both of the images being compared.
This part is relatively simple and can be solved in many ways.
Compare all objects. This part is a task for scientists, considering the fact that they can be shifted, rotated, resized, and so on. :)
However, this can be solved if you have a fixed number of entity types to recognize, like "circle", "triangle", "rectangle", "line".

Video Synthesis - Making waves, patterns, gradients

I'm writing a program to generate some wild visuals. So far I can paint each pixel with a random blue value:
for (y = 0; y < YMAX; y++) {
    for (x = 0; x < XMAX; x++) {
        b = rand() % 255;
        setPixelColor(x, y, r, g, b);
    }
}
I'd like to do more than just make blue noise, but I'm not sure where to start (Google isn't helping me much today), so it would be great if you could share anything you know on the subject or some links to related resources.
I used to do this kind of trick in the past. Unfortunately, I don't have the code anymore :-/
You'd be amazed at what effects bitwise and integer arithmetic operators can produce:
FRAME_ITERATION++;
for (y = 0; y < YMAX; y++) {
    for (x = 0; x < XMAX; x++) {
        b = (x | y) % FRAME_ITERATION;
        setPixelColor(x, y, r, g, b);
    }
}
Sorry, but I don't remember the exact combinations, so b = (x | y) % FRAME_ITERATION; might not actually render anything beautiful. But you can try your own combos.
Anyway, with code like the above, you can produce weird patterns and even water-like effects.
Waves are usually done with trig functions (sin/cos) or tables that approximate them; a small sketch follows.
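For example, an untested sketch of a sine-based wave pattern, reusing the setPixelColor call, the r and g values, and XMAX/YMAX from the question; t is a time value you advance each frame, and the frequency constants are arbitrary:

#include <math.h>

// Sum a few sines of x, y, and time, then rescale to 0..255.
// The sum lies in [-3, 3], hence the (v + 3) / 6 normalization.
for (int y = 0; y < YMAX; y++) {
    for (int x = 0; x < XMAX; x++) {
        double v = sin(x * 0.06 + t)               // vertical bands drifting with t
                 + sin(y * 0.04 - t * 0.5)         // horizontal bands
                 + sin((x + y) * 0.03 + t * 0.8);  // diagonal bands
        int b = (int)((v + 3.0) / 6.0 * 255.0);
        setPixelColor(x, y, r, g, b);
    }
}

Increase t by a small step (say 0.05) per frame to animate it.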
You can also do some cool water ripples with some simple math. See here for code and an online demo.
