BFS/IDS Search with Geometric Shapes

I have a problem I am working on involving breadth-first search (BFS) and iterative deepening search (IDS). I understand the search mechanisms for trees, but I don't understand how to apply them to grids and geometric shapes. If I wanted to perform a BFS, how would I apply it to this problem, where I have to move the pieces so that they fit perfectly into the square on the right-hand side? My attempt is to first take two pieces, place them in the square, and then branch out from each side. The problem is that there are many ways I can place the pieces in level 1 of the BFS tree. By looking at the image I know the solution, but I do not know how I would arrive at it in terms of the searches.

I am going to assume from your post history that we are enrolled in the same class and have the same assignment due on Monday.
The way I thought to approach this problem is:
Case 0 is the empty board.
Case 1 is the multitude of different positions in which a shape such as the 3x1 could fit in the rectangle.
Case 2 is the multitude of different positions in which another shape, such as the U-shaped one, could fit while taking the 3x1 into account.
As you go on, some shapes aren't going to fit anymore, so those branches aren't extended any further.
I haven't figured it out fully; if you want to ponder it further, or if you have figured out another way to do this, I guess we could 'team up' and try to figure it out.
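A rough sketch of what I mean in Python. The pieces here are toy stand-ins (two L-trominoes on a 2x3 board), not the actual assignment shapes; a state is the set of filled cells plus the set of unused pieces, so level k of the BFS tree holds every legal board with k pieces placed:

    from collections import deque

    def normalize(cells):
        # Shift a set of (row, col) cells so its top-left corner is at the origin.
        mr = min(r for r, _ in cells)
        mc = min(c for _, c in cells)
        return frozenset((r - mr, c - mc) for r, c in cells)

    def orientations(piece):
        # All distinct rotations and mirror images of a piece.
        out = set()
        for s in (set(piece), {(r, -c) for r, c in piece}):  # piece and mirror
            for _ in range(4):
                s = {(c, -r) for r, c in s}                  # rotate 90 degrees
                out.add(normalize(s))
        return out

    def bfs(rows, cols, pieces):
        # A state is (filled cells, names of unused pieces).
        start = (frozenset(), frozenset(pieces))
        queue, seen = deque([start]), {start}
        while queue:
            filled, remaining = queue.popleft()
            if not remaining:
                return filled                                # all pieces placed
            for name in remaining:
                for shape in orientations(pieces[name]):
                    for dr in range(rows):
                        for dc in range(cols):
                            placed = {(r + dr, c + dc) for r, c in shape}
                            if (all(r < rows and c < cols for r, c in placed)
                                    and not placed & filled):
                                state = (frozenset(filled | placed),
                                         remaining - {name})
                                if state not in seen:
                                    seen.add(state)
                                    queue.append(state)
        return None

    # Two L-tromino stand-ins tiling a 2x3 board (toy data, not the real shapes).
    pieces = {"L1": {(0, 0), (1, 0), (1, 1)}, "L2": {(0, 0), (1, 0), (1, 1)}}
    print(bfs(2, 3, pieces))

Even on this toy board the first level already branches a lot, which is exactly the problem you describe; IDS would use the same successor generation, just inside a depth-limited DFS restarted with increasing limits.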


Finding powerlines in LIDAR point clouds with RANSAC

I'm trying to find powerlines in LIDAR point clouds with skimage.measure's ransac() function. This is my very first time meddling with these modules in Python, so bear with me.
So far, all I knew how to do reliably was filter low or 'ground' points from the cloud to reduce the number of points to deal with.
import laspy

def filter_Z(las, threshold):
    # Keep only points higher than the lowest Z value plus a hand-picked threshold.
    filtered = laspy.create(point_format=las.header.point_format,
                            file_version=las.header.version)
    filtered.points = las.points[las.Z > las.Z.min() + threshold]
    print(f'original size: {len(las.points)}')
    print(f'filtered size: {len(filtered.points)}')
    filtered.write('filtered_points2.las')
    return filtered
The threshold is something I put in by hand, since the .las files I worked with contain some nasty outliers that prevent me from calculating it dynamically.
The filtered point cloud, or one of them at least, looks like this:
Note the evil red outliers on top; maybe they're birds or something. Along with them are trees and roofs of buildings. If anyone wants to take a look at the .las files, let me know. I can't put a WeTransfer link in the body of the question.
A top-down view:
I've looked into it as much as I could and found the skimage.measure module and the ransac function that comes with it. I played around a bit to get a feel for it, and currently I'm stumped on how to continue.
from skimage.measure import ransac, LineModelND

def ransac_linefit_sklearn(points):
    # Fit a single robust line to all points at once.
    model_robust, inliers = ransac(points, LineModelND, min_samples=2,
                                   residual_threshold=1000, max_trials=1000)
    return model_robust, inliers
The result is quite predictable (I ran ransac on a 2D view of the cloud just to make it a bit easier on the PC).
Using this doesn't really yield any good results in examples like the one I posted. The vegetation clusters have too many points, and the line is fitted through them because they have the highest point density.
I tried DBSCAN() to cluster the points, but it didn't work. I also attempted OPTICS(), but as I write this it still hasn't finished running.
From what I've read in various articles, the best course of action would be to cluster the points and perform RANSAC on each individual cluster to find lines, but I'm not really sure how to do that or what clustering method to use in situations like these.
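To make the idea concrete, something like this is what I have in mind: split the cloud with DBSCAN and fit one line per cluster (the eps and min_samples values are guesses that would need tuning to the units of the .las file):

    import numpy as np
    from sklearn.cluster import DBSCAN
    from skimage.measure import ransac, LineModelND

    def lines_per_cluster(points, eps=2.0, min_samples=10):
        points = np.asarray(points, dtype=float)
        labels = DBSCAN(eps=eps, min_samples=min_samples).fit_predict(points)
        fits = {}
        for label in set(labels) - {-1}:         # label -1 is DBSCAN's noise bin
            cluster = points[labels == label]
            if len(cluster) < 2:
                continue
            model, inliers = ransac(cluster, LineModelND, min_samples=2,
                                    residual_threshold=1.0, max_trials=1000)
            fits[label] = (model, inliers)
        return fits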
One thing I'm also curious about is just filtering out the big blobs of trees that mess with model fitting.
Inadequacy of RANSAC
RANSAC works best when your data fits a mono-modal distribution around your model. In the case of this point cloud, it works best when there is only one line with outliers, but there are at least 5 lines when viewed from a bird's-eye view. Check out this older SO post that discusses your problem; Francesco's response suggests an iterative RANSAC-based approach, sketched below.
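A minimal sketch of that iterative idea with skimage: fit one line, peel off its inliers, and repeat on what remains. The line count and residual threshold here are guesses that would need tuning:

    import numpy as np
    from skimage.measure import ransac, LineModelND

    def iterative_ransac(points, n_lines=5, residual_threshold=1.0):
        remaining, models = np.asarray(points, dtype=float), []
        for _ in range(n_lines):
            if len(remaining) < 2:
                break
            model, inliers = ransac(remaining, LineModelND, min_samples=2,
                                    residual_threshold=residual_threshold,
                                    max_trials=1000)
            models.append(model)                 # keep the fitted line
            remaining = remaining[~inliers]      # drop its inliers, refit the rest
        return models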
Octrees and SVD
Colleagues worked on a similar problem at my previous job. I am not fluent in their approach, but I know enough to provide some hints.
Their approach resembled Francesco's suggestion: they partitioned the point cloud into octrees and calculated the singular value decomposition (SVD) within each partition. The three resulting singular values correspond to the geometric distribution of the data.
If the first singular value is significantly greater than the other two, then the points are line-like.
If the first and second singular values are significantly greater than the third, then the points are plane-like.
If all three values are of similar magnitude, then the data is just a "glob" of points.
They used these rules iteratively to rule out the points that were most likely NOT part of the lines; a sketch of the singular-value test follows below.
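Here is a minimal numpy sketch of that test for the points in one octree cell (it assumes the cell holds at least three points); the dominance ratio is a hypothetical threshold you would tune per dataset:

    import numpy as np

    def classify_cell(points, ratio=5.0):
        # Classify a cell of 3D points as line-like, plane-like, or a glob,
        # based on how strongly each singular value dominates the next.
        pts = np.asarray(points, dtype=float)
        centered = pts - pts.mean(axis=0)               # remove the centroid
        s = np.linalg.svd(centered, compute_uv=False)   # singular values, descending
        if s[0] > ratio * s[1]:
            return "line"    # one dominant direction
        if s[1] > ratio * s[2]:
            return "plane"   # two dominant directions
        return "glob"        # no dominant direction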
Literature
If you want to look into published methods, maybe this paper is a good starting point. Power lines are modeled as hyperbolic functions.

Manual Triangulation in Point Cloud Library

I have a point cloud, and I have performed plane detection. Now I want to triangulate the scene.
I already have the triangulation of each plane, which looks like this:
I want to use Point Cloud Library's GreedyProjectionTriangulation to reconstruct the scene, so I want to adapt the different functions that intervene in the reconstruction.
I dug into the code of gp3.h and gp3.hpp (which can be found in pcl/surface/include/pcl/surface) and read the associated publication. So far I have come to this:
Every point of my planes should be marked as fringe at the very beginning - and it is easy to do so with the vector state_.
We add triangles of the planes with the function addTriangles, no problem with this.
I don't know how to enforce the edges. There is a doubleEdges vector, but I didn't really understand how it works; it seems that it is reset for every iteration on a point.
I have to push the points of my planes into the fringe_queue_ vector, but the addFringe function is weird, since it asks for 2 arguments and I don't understand why.
I didn't understand what the vector part_ was for.
My current result is this:
It is not very clear on the image, but since I don't know how to enforce edges, I have issues of overlapping triangles.
EDIT
I continued to dig into the code and identified what the crucial part may be. To avoid a wall of code, you can find the interesting part here; it is approximately between lines 180 and 285 in gp3.hpp.
I can't understand what sfn_ and ffn_ are for. My intuition is that sfn_[R_] returns the second fringe neighbor of R_, and ffn_[R_] returns the first fringe neighbor of R_. So, something like this:
If I'm right, I can easily do this, since I have the contour of my plane sorted in the right order.
I still don't know how to enforce the edges of my triangles belonging to my plane. Looking at the code, I think the key is in the doubleEdges vector, but I don't know how to modify this part to make it relevant to my problem.

AI search - build a rectangle from 12 tetris shapes: how many states are possible?

You have 12 shapes, each of which is made of five identical squares.
You need to combine the 12 pieces into one rectangle.
You can form four different rectangles:
2339 solutions (6x10), 2 solutions (3x20), 368 solutions (4x15), 1010 solutions (5x12).
I need to build the 3X20 rectangle:
My question: what is the maximum number of states (i.e., the branching factor) that is possible?
My halfway calculation:
The way I see it, there are 4 operations on each shape: turning it 90/180/270 degrees and mirroring it (turning it upside down).
Then you have to put the shape somewhere on the 3X20 board.
Illegal states will be ones where the shape doesn't fit on the board, but they are still states.
For the first move, you can choose each shape in 4 ways, which is 4X12 ways, and then you need to multiply by the number of positions the shape can be in; that gives the number of states. But how can I calculate the number of positions?
Please help me with this calculation; it is very important, and it is not some kind of homework that I'm trying to avoid.
I think there is no easy and 'intelligent' way to list solutions (or states) to pentomino puzzles; you have to try all possibilities. Recursive programming or backtracking is the way to do it. You should check this solution, which also has Java source code available. Hopefully that points you in the right direction.
There is also a Python solution that is perhaps more readable.
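To answer the position-count part directly: for each distinct orientation of a piece whose bounding box is h x w, the number of anchor positions on the 3x20 board is (3 - h + 1) * (20 - w + 1). Here is a rough Python sketch that sums this over all rotations and mirror images; it counts only placements that actually fit on the board, and the F-pentomino offsets are just an example piece:

    def normalize(cells):
        # Shift a set of (row, col) cells so its top-left corner is at the origin.
        mr = min(r for r, _ in cells)
        mc = min(c for _, c in cells)
        return frozenset((r - mr, c - mc) for r, c in cells)

    def orientations(piece):
        # All distinct rotations and mirror images of a piece.
        out = set()
        for s in (set(piece), {(r, -c) for r, c in piece}):  # piece and mirror
            for _ in range(4):
                s = {(c, -r) for r, c in s}                  # rotate 90 degrees
                out.add(normalize(s))
        return out

    def count_placements(piece, rows=3, cols=20):
        # Sum the legal anchor positions over every distinct orientation.
        total = 0
        for shape in orientations(piece):
            h = 1 + max(r for r, _ in shape)
            w = 1 + max(c for _, c in shape)
            if h <= rows and w <= cols:
                total += (rows - h + 1) * (cols - w + 1)
        return total

    F = {(0, 1), (0, 2), (1, 0), (1, 1), (2, 1)}  # the F-pentomino, as an example
    print(count_placements(F))

Multiplying such counts across the 12 pieces gives an upper bound on the first-level branching factor; if you also want to count placements that hang off the board as 'illegal states', drop the bounding-box check.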

Recognizing line segments from a sequence of points

Given an input of 2D points, I would like to segment them into lines. So if you draw a zig-zag style line, each of the segments should be recognized as a line. Usually, I would use OpenCV's cvHoughLines or a similar approach (PCA with an outlier remover), but in this case the program is not allowed to make false-positive errors. If the user draws a line and it's not recognized, that's OK; but if the user draws a circle and it comes out as a square, that's not OK. So I have an upper bound on the error, but if it's a long line and some of the points are a greater distance from the approximated line, it's OK again. Summed up:
- line detection
- no false positives
- bounded, dynamically adjusting error
Oh, and the points are drawn in sequence, just like hand drawing.
At least it does not have to be fast; it's for a sketching tool. Does anyone have an idea?
This has the same difficulty as voice and gesture recognition. In other words, you can never be 100% sure that you've found all the corners/junctions, and among those you've found you can never be 100% sure they are correct. The reason you can't be absolutely sure is because of ambiguity. The user might have made a single stroke, intending to create two lines that meet at a right angle. But if they did it quickly, the 'corner' might have been quite round, so it wouldn't be detected.
So you will never be able to avoid false positives. The best you can do is mitigate them by exploring several possible segmentations, and using contextual information to decide which is the most likely.
There are lots of papers on sketch segmentation every year. This seems like a very basic thing to solve, but it is still an open topic. The one I use is out of Texas A&M, called MergeCF. It is nicely summarized in this paper: http://srlweb.cs.tamu.edu/srlng_media/content/objects/object-1246390659-1e1d2af6b25a2ba175670f9cb2e989fe/mergeCF-sbim09-fin.pdf.
Basically, you find the areas that have high curvature (higher than some fraction of the mean curvature) and slow speed (so you need timestamps). Combining curvature and speed improves the initial fit quite a lot. That will give you clusters of points, which you reduce to a single point in some way (e.g. the one closest to the middle of the cluster, or the one with the highest curvature, etc.). This is an 'over fit' of the stroke, however. The next stage of the algorithm is to iteratively pick the smallest segment, and see what would happen if it is merged with one of its neighboring segments. If merging doesn't increase the overall error too much, you remove the point separating the two segments. Rinse, repeat, until you're done.
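A rough numpy sketch of that first stage, assuming an array of (x, y) points plus their timestamps; the threshold fractions are illustrative, not MergeCF's actual values, and I use the median variant mentioned below:

    import numpy as np

    def corner_candidates(xy, t, curv_frac=2.0, speed_frac=0.5):
        # Flag interior points with high curvature and low drawing speed.
        xy = np.asarray(xy, dtype=float)
        t = np.asarray(t, dtype=float)
        d = np.diff(xy, axis=0)                         # segment vectors
        seg = np.linalg.norm(d, axis=1)                 # segment lengths
        speed = seg / np.maximum(np.diff(t), 1e-9)      # speed on each segment
        heading = np.arctan2(d[:, 1], d[:, 0])
        # Turning angle at each interior point, wrapped to [-pi, pi].
        turn = np.abs((np.diff(heading) + np.pi) % (2 * np.pi) - np.pi)
        curvature = turn / np.maximum((seg[:-1] + seg[1:]) / 2, 1e-9)
        pt_speed = (speed[:-1] + speed[1:]) / 2         # speed at interior points
        mask = ((curvature > curv_frac * np.median(curvature)) &
                (pt_speed < speed_frac * np.median(pt_speed)))
        return np.where(mask)[0] + 1                    # indices of corner candidates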
It has been a while since I've looked at the new segmenters, but I don't think there have been any breakthroughs.
In my implementation I use the curvature median rather than the mean for my initial threshold, which seems to give better results. My heavily modified implementation is here; it is definitely not a self-contained thing, but it might give you some insight: http://code.google.com/p/pen-ui/source/browse/trunk/thesis-code/src/org/six11/sf/CornerFinder.java

visibility of objects in multi-dimensional space based on a specific perspective

I'm working on a data mining algorithm that considers features in their n-dimensional feature space and allows surrounding training examples to block the 'visibility' of other training examples, effectively removing them from the training set for this particular query.
I've been trying to find an efficient way to determine which points are 'visible' to the query. I thought the realm of computer graphics might offer some insight, but there is a lot of information to peruse, and much of it either can't be generalized to multiple dimensions or is only efficient when the number of dimensions is low.
I was hoping I could get some pointers from those of you who are more intimately knowledgeable in the domain.
The solution I found is to convert the Euclidean coordinates into 'hyper-spherical' coordinates. It's similar to the spherical coordinate system, except that you add an additional angle with a range of [0, pi) for each dimension beyond three.
After that, I can sort the list of points by their distance from the origin and iterate through, comparing each point in the list to the first item and looking for angles that overlap. After each iteration, I remove the first item in the list and any items that were discovered to be blocked, then start over with the new first item (the closest remaining point).
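A rough Python sketch of the procedure. The blocking radius alpha is a made-up parameter (a real implementation would derive the blocked angular range from each point's extent), and the overlap test compares unit-vector angles directly, which is one reasonable reading of 'angles that overlap':

    import numpy as np

    def to_hyperspherical(x):
        # Cartesian -> (r, phi_1..phi_{n-1}), phi_i = arccos(x_i / |(x_i..x_n)|),
        # with the sign of the last angle fixed by the final coordinate.
        x = np.asarray(x, dtype=float)
        r = np.linalg.norm(x)
        angles = []
        for i in range(len(x) - 1):
            tail = np.linalg.norm(x[i:])
            angles.append(np.arccos(x[i] / tail) if tail > 0 else 0.0)
        if x[-1] < 0:
            angles[-1] = 2 * np.pi - angles[-1]
        return r, np.array(angles)

    def visible(points, query, alpha=0.05):
        # Greedy sweep: nearer points occlude farther ones whose direction
        # from the query is within alpha radians of an already-kept point.
        rel = np.asarray(points, dtype=float) - np.asarray(query, dtype=float)
        dist = np.linalg.norm(rel, axis=1)
        dirs = rel / np.maximum(dist, 1e-12)[:, None]
        kept = []
        for i in np.argsort(dist):               # closest first
            if all(np.arccos(np.clip(dirs[i] @ dirs[j], -1.0, 1.0)) > alpha
                   for j in kept):
                kept.append(i)
        return kept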
I don't know if anyone will ever find this useful, but I thought I should put the answer up anyway.
