Best heuristic for A* search

I'm implementing the Ricochet Robots game using A* search. The goal of the game is to move a specific robot to a specific location on the board. The board can have several walls, and there are three more robots that can also be moved.
I'm using the Manhattan distance as the heuristic, but the search doesn't work in every scenario I try; sometimes I get an infinite loop. I believe it is due to the obstacles on the board.
What would be the best heuristic for this case?
This is the code for the A* search function. It receives the heuristic function as input. A node is an object that holds the current state and the current board.
def astar_search(problem, h, display=False):
    h = memoize(h or problem.h, 'h')
    return best_first_graph_search(problem, lambda n: n.path_cost + h(n), display)

def best_first_graph_search(problem, f, display=False):
    f = memoize(f, 'f')
    node = Node(problem.initial)
    frontier = PriorityQueue('min', f)
    frontier.append(node)
    explored = set()
    while frontier:
        node = frontier.pop()
        if problem.goal_test(node.state):
            if display:
                print(len(explored), "paths have been expanded and", len(frontier), "paths remain in the frontier")
            return node
        explored.add(node.state)
        for child in node.expand(problem):
            if child.state not in explored and child not in frontier:
                frontier.append(child)
            elif child in frontier:
                if f(child) < frontier[child]:
                    del frontier[child]
                    frontier.append(child)
    return None

The heuristic used by A* must never overestimate the cost. Since it's possible to move an arbitrary distance with a single move in Ricochet Robots, Manhattan distance will not work as a heuristic.
The only admissible heuristic I can think of is "2 if the robot is not on the same row or column as the goal, 1 if it is on the same row or column but not yet at the goal, 0 otherwise", since diagonal moves are not possible.
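A minimal sketch of that heuristic, assuming the target robot's position and the goal cell are available as (row, col) tuples (robot_pos and goal_pos are placeholder names, not taken from the code above):

def ricochet_heuristic(robot_pos, goal_pos):
    # Admissible lower bound for Ricochet Robots:
    # 0 if the robot is already on the goal, 1 if it shares a row or column
    # with the goal (a single slide might reach it), otherwise at least 2 moves.
    if robot_pos == goal_pos:
        return 0
    if robot_pos[0] == goal_pos[0] or robot_pos[1] == goal_pos[1]:
        return 1
    return 2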

Running time for a shortestP alg in unweighted undirected graph in python3

For this kind of problem I thought a BFS-like implementation would be the better choice. I don't know why, but after some running-time testing, the resulting plot resembles an exponential function.
So I began to question my code's correctness: is it really efficient? Can you help me do a running-time analysis for a good algorithm?
I've plotted the x-axis on a log scale in base 1.5 (since in the code I use the first 30 powers of 1.5 as the number of vertices fed to a random graph generator). It still looks exponential...
import collections

def bfs_short(graph, source, target):
    visited = set()
    queue = collections.deque([source])
    d = {}
    d[source] = 0
    while queue:
        u = queue.pop()
        if u == target:
            return d[target]
        if u not in visited:
            visited.add(u)
            for w in graph[u]:
                if w not in visited:
                    queue.appendleft(w)
                    d[w] = d[u] + 1
The thing is... I haven't posted the benchmark input trials, which may also cause problems, but first of all I want to be sure that the code itself is correct; sorting out the testing issues is my final goal.
A problem in your code is that your queue does not take into account the distance to the origin when choosing the next vertex to pop.
Also, collections.deque is not a priority queue, and thus will not give you the closest unvisited vertex seen so far when you pop an element from it.
This should do it, using heapq, a built-in heap implementation:
import heapq

def bfs_short(graph, source, target):
    visited = set()
    queue = [(0, source)]
    heapq.heapify(queue)
    while queue:
        dist, vertex = heapq.heappop(queue)
        if vertex == target:
            return dist
        if vertex not in visited:
            visited.add(vertex)
            for neighbor in graph[vertex]:
                if neighbor not in visited:
                    heapq.heappush(queue, (dist + 1, neighbor))
In this version, (distance, vertex) pairs are pushed onto the queue. When Python compares tuples, it compares their first elements, then the second ones in case of equality, and so on.
So by pushing the distance from the origin as the first member of the tuple, the smallest (dist, vertex) pair in the heap is also the vertex closest to the origin.
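For example, calling the heapq version above on a small adjacency-list graph (an illustrative dict, not taken from the question):

graph = {
    'a': ['b', 'c'],
    'b': ['a', 'd'],
    'c': ['a', 'd'],
    'd': ['b', 'c', 'e'],
    'e': ['d'],
}

print(bfs_short(graph, 'a', 'e'))  # 3, i.e. the shortest path a -> b -> d -> e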

"unique" crossover for genetic algorithm - TSP

I am creating a Genetic Algorithm to solve the Traveling Salesman Problem.
Currently, two 2D lists represent the two parents that need to be crossed:
import numpy as np

path_1 = np.arange(12).reshape(6, 2)
np.random.shuffle(path_1)  # shuffles the rows in place and returns None
path_2 = np.arange(12).reshape(6, 2)
Suppose each element in the list represents an (x, y) coordinate on a Cartesian plane, and the 2D list represents the path that the "traveling salesman" must take (from index 0 to index -1).
Since the TSP requires that all points be included in a path, the resulting child of this crossover must have no duplicate points.
I have little idea how to perform such a crossover so that the resulting child is representative of both parents.
You need to use an ordered crossover operator, like OX1.
OX1 is a fairly simple permutation crossover. Basically, a swath of consecutive alleles from parent 1 drops down, and the remaining values are placed in the child in the order in which they appear in parent 2.
I used to run TSP with these operators:
Crossover: Ordered Crossover (OX1).
Mutation: Reverse Sequence Mutation (RSM)
Selection: Roulette Wheel Selection
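As a rough sketch of OX1 on permutations of city indices (the function name ox1 is mine, the cut points are random, and filling from position 0 instead of wrapping after the second cut is a simplification of the classic operator):

import random

def ox1(parent1, parent2):
    """Ordered crossover (OX1, simplified): copy a random slice from parent1,
    then fill the remaining positions with parent2's cities in order."""
    size = len(parent1)
    a, b = sorted(random.sample(range(size), 2))
    child = [None] * size
    child[a:b + 1] = parent1[a:b + 1]          # swath from parent 1
    taken = set(child[a:b + 1])
    fill = [city for city in parent2 if city not in taken]
    positions = [i for i in range(size) if child[i] is None]
    for i, city in zip(positions, fill):
        child[i] = city
    return child

# Example: ox1([0, 1, 2, 3, 4, 5], [5, 3, 1, 0, 4, 2]) yields a valid permutation
# that keeps a consecutive slice of parent 1 and parent 2's relative order elsewhere.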
You can do something like this:
Choose half (or any random number between 0 and length - 1) of the coordinates from one parent using any approach, let's say those where i % 2 == 0.
These can be placed into the child in several ways: randomly, all at the start (or end), or at alternating positions.
The remaining coordinates then need to come from the second parent: traverse the second parent and, whenever a coordinate has not been chosen yet, add it to the next empty space.
For example,
I am choosing the even-positioned coordinates from parent 1, putting them at the even indices of the child, and then traversing parent 2 to place the remaining coordinates at the odd indices of the child.
def crossover(p1, p2, number_of_cities):
    chk = {}
    for i in range(number_of_cities):
        chk[i] = 0
    child = [-1] * number_of_cities
    for x in range(len(p1)):
        if x % 2 == 0:
            child[x] = p1[x]
            chk[p1[x]] = 1
    y = 1
    for x in range(len(p2)):
        if chk[p2[x]] == 0:
            child[y] = p2[x]
            y += 2
    return child
This approach preserves the order in which cities are visited in both parents.
Also, since it is not symmetric, p1 and p2 can be swapped to produce two children, and the better one (or both) can be kept.
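For example, with cities encoded as indices 0..5 (hypothetical parents, not from the question), the function above gives:

p1 = [0, 1, 2, 3, 4, 5]
p2 = [5, 3, 1, 0, 4, 2]
print(crossover(p1, p2, 6))  # [0, 5, 2, 3, 4, 1]: even indices from p1, the rest in p2's order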

comparison of binary and ternary search

Considering this post, which says: "big-O time for ternary search is log_3(N) instead of binary search's log_2(N)",
this should be because a classical ternary search requires three comparisons per step instead of two, but would this implementation be any less efficient than a binary search?
#!/usr/bin/python3

List = sorted([1, 3, 5, 6, 87, 9, 56, 0])
print("The list is: {}".format(List))

def ternary_search(L, R, search):
    """iterative ternary search"""
    if L > R:
        L, R = R, L
    while L < R:
        mid1 = L + (R - L) // 3
        mid2 = R - (R - L) // 3
        if search == List[mid1]:
            L = R = mid1
        elif search == List[mid2]:
            L = R = mid2
        elif search < List[mid1]:
            R = mid1
        elif search < List[mid2]:
            L = mid1
            R = mid2
        else:
            L = mid2
    if List[L] == search:
        print("search {} found at {}".format(List[L], L))
    else:
        print("search not found")

if __name__ == '__main__':
    ternary_search(0, len(List) - 1, 6)
This implementation effectively makes only two comparisons per iteration. So, ignoring the time taken to compute the midpoints, wouldn't it be as efficient as a binary search?
Then why not take this further to an n-ary search?
(Although then the main cost would be the number of midpoint calculations rather than the number of comparisons, though I don't know if that's the right way to look at it.)
While both have logarithmic complexity, a ternary search will be faster than binary search for a large enough tree.
Though binary search has one less comparison per node, its tree is deeper than a ternary search tree. For a tree of 100 million nodes, assuming both trees are properly balanced, the depth of the BST would be ~26, while for a ternary search tree it is ~16. Again, this speed-up will not be felt unless you have a very big tree.
The answer to your next question, "why not take this further to an n-ary search?", is interesting. There are actually trees which take it further, for example B-trees and B+-trees. They are heavily used in databases and file systems and can have 100-200 child nodes branching from a parent.
Why? Because for these trees you need to invoke an I/O operation for every node you access, which, as you are probably aware, costs far more than the in-memory operations your code performs. So although you now have to perform n-1 comparisons in each node, the cost of these in-memory comparisons pales in comparison to the I/O cost, which is proportional to the tree's depth.
As for in-memory operations, remember that when you have an n-ary tree for n elements, you basically have an array, which has O(n) search complexity, because all elements are now in the same node. So, as you increase the arity, at some point it stops being more efficient and starts actually hurting performance.
So why do we usually prefer a BST, i.e. 2-ary, and not 3-ary or 4-ary? Because it's a lot simpler to implement. That's really it, no big mystery here.
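A quick check of the depth figures quoted above, assuming both trees are balanced:

import math

n = 100_000_000  # 100 million nodes, as in the example above

print(round(math.log(n, 2), 1))  # ~26.6 -> depth of a balanced binary search tree
print(round(math.log(n, 3), 1))  # ~16.8 -> depth of a balanced ternary search tree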

Suboptimal solution given by A* search

I don't understand how the following graph gives a suboptimal solution with A* search.
The graph above was given as an example where A* search yields a suboptimal solution, i.e. the heuristic is admissible but not consistent. Each node has a corresponding heuristic value, and the weight of each transition is given. I don't understand how A* search will expand the nodes.
The heuristic h(n) is not consistent.
Let me first define when a heuristic function is said to be consistent.
h(n) is consistent if
– for every node n
– for every successor n' due to legal action a
– h(n) <= c(n,a,n') + h(n')
Here, clearly 'A' is a successor of node 'B', but h(B) > c(B,a,A) + h(A).
Therefore the heuristic function is not consistent/monotone, and so A* need not give an optimal solution.
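A small sketch of this consistency check on a toy weighted graph (the graph from the question is only shown as an image, so the numbers below are purely illustrative):

def is_consistent(h, edges):
    """Check h(n) <= c(n, a, n') + h(n') for every directed edge (n, n', cost)."""
    return all(h[n] <= cost + h[succ] for n, succ, cost in edges)

# Illustrative values only -- not the graph from the question.
h = {'S': 7, 'A': 6, 'B': 2, 'G': 0}
edges = [('S', 'A', 1), ('S', 'B', 4), ('A', 'B', 2), ('B', 'G', 5)]
print(is_consistent(h, edges))  # False: h(S)=7 > c(S,B)=4 + h(B)=2, so h is not consistent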
Honestly I don't see how A* could return a sub-optimal solution with the given heuristic.
This is for a simple reason: the given heuristic is admissible (and even monotone/consistent).
h(s) <= h*(s) for each s in the graph
You can check this yourself comparing the h value in each node to the cost of shortest path to g.
Given the optimality property of A*, I don't see how it could return a sub-optimal solution; the optimal path should of course be S -> A -> G.
The only way it could return a suboptimal solution is if it stopped as soon as an action from a node in the frontier led to the goal (so that it merely has a path to the goal), but that would not be A*.
In graph search, we don't expand the already discovered nodes.
f(node) = cost to node from that path + h(node)
At first step:
fringe = {S} - explored = {}
S is discarded from the fringe and A and B are added.
fringe = {A, B} - explored = {S}
Then f(A) = 5 and f(B) = 7. So we discard A from the fringe and add G.
fringe = {G, B} - explored = {S, A}
Then f(G) = 8 and f(B) = 7 so we discard B from the fringe but don't add A since we already explored it.
fringe = {G} - explored = {S, A, B}
Finally, only G is left in the fringe, so we discard G from the fringe and we have reached our goal.
The path would be S->A->G.
If this was a tree search problem, then we would find the answer S->B->A->G since we would reconsider the A node.

Performance difference RBFS - A*

I've implemented RBFS as defined in AIMA and wanted to compare it to an implementation of A*. When I try them out on an instance from TSPLIB (ulysses16, 16 cities), both come up with the correct solution, and the memory usage is also as expected (exponential for A*, linear for RBFS). The only weird thing is that the RBFS algorithm is much faster than A*, when that shouldn't be the case. A* finishes in about 3.5 minutes, while RBFS takes only a few seconds. When I count the number of visits per node, A* visits a lot more nodes than RBFS. That also seems counter-intuitive.
Is this because of the trivial instance? I can't really try it out on a bigger instance as my computer doesn't have enough memory for that.
Has anyone got any suggestions? Both algorithms seem to be correct according to their specifications, their results and memory usage. Only the execution time seems off...
I have already looked everywhere but couldn't find anything about a difference in their search strategies, except for the fact that RBFS can revisit nodes.
Thanks
Jasper
Edit 1: AIMA refers to the book Artificial Intelligence: A Modern Approach by Russell and Norvig.
Edit 2: TSPlib is a set of instances of the TSP problem (http://comopt.ifi.uni-heidelberg.de/software/TSPLIB95/)
Edit 4: The code for the A* algorithm is given as follows. This should be correct.
def graph_search(problem, fringe):
    """Search through the successors of a problem to find a goal.
    The argument fringe should be an empty queue.
    If two paths reach a state, only use the best one. [Fig. 3.18]"""
    closed = {}
    fringe.append(Node(problem.initial))
    while fringe:
        node = fringe.pop()
        if problem.goal_test(node.state):
            return node
        if node.state not in closed:
            closed[node.state] = True
            fringe.extend(node.expand(problem))
    return None

def best_first_graph_search(problem, f):
    return graph_search(problem, PriorityQueue(f, min))

def astar_graph_search(problem, h):
    def f(n):
        return n.path_cost + h(n)
    return best_first_graph_search(problem, f)
Here, problem is a variable containing the details of the problem to be solved (when the goal is reached, how to generate successors, the initial state, ...). A Node stores the path used to reach its state by keeping a reference to its parent node, and provides some other utility functions. An older version of this code is at http://aima-python.googlecode.com/svn/trunk/search.py. For the TSP, tours are created incrementally, and the heuristic used is the minimum spanning tree over the cities that have not yet been visited.
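For reference, a rough sketch of such an MST heuristic using Prim's algorithm, assuming a precomputed distance matrix and a collection of unvisited city indices (dist and mst_heuristic are my names, not taken from the question's code):

import heapq

def mst_heuristic(dist, unvisited):
    """Lower bound on the remaining tour cost: total weight of a minimum
    spanning tree (Prim's algorithm) over the unvisited cities.
    dist[i][j] is the distance between cities i and j."""
    unvisited = list(unvisited)
    if len(unvisited) <= 1:
        return 0
    start = unvisited[0]
    in_tree = {start}
    edges = [(dist[start][c], c) for c in unvisited[1:]]
    heapq.heapify(edges)
    total = 0
    while len(in_tree) < len(unvisited):
        w, c = heapq.heappop(edges)
        if c in in_tree:
            continue
        in_tree.add(c)
        total += w
        for other in unvisited:
            if other not in in_tree:
                heapq.heappush(edges, (dist[c][other], other))
    return total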
The code for RBFS is as follows:
def rbfs(problem, h):
    def f(n):
        return n.path_cost + h(n)

    def rbfs_helper(node, bound):
        # print("Current bound: ", bound)
        problem.visit(node)
        if problem.goal_test(node.state):
            return [node, f(node)]
        backup = {}
        backup[node.state.id] = f(node)
        succ = list(node.expand(problem))
        if not succ:
            return [None, float("inf")]
        for v in succ:
            backup[v.state.id] = max(f(v), backup[node.state.id])
        while True:
            sortedSucc = sorted(succ, key=lambda node: backup[node.state.id])
            best = sortedSucc[0]
            if backup[best.state.id] > bound:
                return [None, backup[best.state.id]]
            if len(sortedSucc) == 1:
                [resultNode, backup[best.state.id]] = rbfs_helper(best, bound)
            else:
                alternative = sortedSucc[1]
                [resultNode, backup[best.state.id]] = rbfs_helper(best, min(bound, backup[alternative.state.id]))
            if resultNode is not None:
                return [resultNode, backup[best.state.id]]

    [node, v] = rbfs_helper(Node(problem.initial), float("inf"))
    return node
It also uses the problem and Node objects described above; those were specifically designed to be generic building blocks.
