Understanding A* search

I am having some trouble understanding how A* search could be applied to a robot traversing a maze in real time. I understand how A* works, but only in a "pre-computed" sense (i.e., working out the whole path before attempting to traverse the maze).
How would you use it to give an answer to "where to next" at every step of the maze? Or am I missing something? Thanks a lot!

Generally the robot will map out the maze as best it can, then run the pathfinding algorithm and follow the resulting best path. If changes to the maze are later detected, the robot will rerun A* from its current position.
There is a variant of A*, called D* Lite, that reuses information from past searches to speed up replanning when small changes to the maze are made. This is the algorithm the Mars rovers use.
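The plan-then-replan loop described above can be sketched with a plain grid A*. This is a minimal illustration, not the D* Lite variant: the maze layout, grid encoding, and cell coordinates below are made up for the example.

```python
import heapq
from itertools import count

def astar(grid, start, goal):
    """A* over a 4-connected grid; grid[r][c] == 1 is a wall.
    Returns a list of cells from start to goal, or None if unreachable."""
    def h(p):  # Manhattan-distance heuristic (admissible on a grid)
        return abs(p[0] - goal[0]) + abs(p[1] - goal[1])

    tie = count()  # tiebreaker so the heap never compares cells directly
    open_heap = [(h(start), next(tie), start)]
    g = {start: 0}
    parent = {start: None}
    closed = set()
    while open_heap:
        _, _, node = heapq.heappop(open_heap)
        if node in closed:
            continue
        closed.add(node)
        if node == goal:  # reconstruct the path by walking parent links
            path = []
            while node is not None:
                path.append(node)
                node = parent[node]
            return path[::-1]
        r, c = node
        for nb in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            nr, nc = nb
            if 0 <= nr < len(grid) and 0 <= nc < len(grid[0]) and grid[nr][nc] == 0:
                ng = g[node] + 1
                if ng < g.get(nb, float("inf")):
                    g[nb] = ng
                    parent[nb] = node
                    heapq.heappush(open_heap, (ng + h(nb), next(tie), nb))
    return None

# "Where to next?": plan from the current cell, take the first step of the
# path, and re-run the search whenever the robot detects a map change.
maze = [[0, 0, 0],
        [1, 1, 0],
        [0, 0, 0]]
path = astar(maze, (0, 0), (2, 0))
```

The robot only ever executes the first step of the returned path; replanning from its current position keeps the answer to "where to next" current as the map estimate improves.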

Related

Looking to create an efficient way to display a non-binary tree structure

I'm working on creating a genetic algorithm in Python for a class project. The algorithm works perfectly, but I want to create an image of the tree rather than just simple text output. I have written a function that performs well for trees up to about 4 levels deep. Above that, the display doesn't work well and ends up with too much blank space. I know why, but coming up with a better solution might take me too long.
Does anyone know if there exists a function to create a compact non-binary tree display? I'm looking for one that adjusts each branch so there isn't a ton of blank space between them, given that a lot of the branch depths are uneven. I've found plenty of binary tree display functions, but those don't work because some of my nodes have 3 children.
You can see it working well and not working well in the images.
Decent looking tree:
https://drive.google.com/file/d/1j2BQjanTDgvzttedUyhxnbWhkXuQwyaG/view?usp=sharing
Not so great tree (too much blank space):
https://drive.google.com/file/d/1Gh90e3JvAeCB_U2NhvvouZulM4CVQ8ZH/view?usp=sharing
Thanks in advance.
Use graphviz.
In particular, the dot layout generator should be a good match for your needs.
You may also find the NetworkX drawing module helpful.
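One way to hand a tree to Graphviz without any Python dependency is to emit DOT text directly and render it with the `dot` command-line tool, which handles uneven branch depths compactly on its own. The nested-dict tree format and the GP-style node names below are assumptions for the example, not part of the original question.

```python
def tree_to_dot(tree):
    """Emit Graphviz DOT source for a tree given as nested dicts,
    e.g. {root: {child: {grandchild: {}}, other_child: {}}}.
    dot's layered layout packs uneven branches tightly on its own."""
    lines = ["digraph G {", "  node [shape=box];"]

    def walk(node, children):
        for child, grandchildren in children.items():
            lines.append(f'  "{node}" -> "{child}";')
            walk(child, grandchildren)

    root = next(iter(tree))
    walk(root, tree[root])
    lines.append("}")
    return "\n".join(lines)

# Hypothetical GP tree: one node with 3 children and uneven depths
gp_tree = {"add": {"mul": {"x": {}, "y": {}}, "3": {}, "neg": {"z": {}}}}
dot_src = tree_to_dot(gp_tree)
# Write dot_src to tree.dot, then render: dot -Tpng tree.dot -o tree.png
```

This assumes node labels are unique; if your GP tree can repeat labels (e.g. two "add" nodes), emit a unique id per node and set the label as a node attribute instead.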

How to program moving a robot(e-puck) to the specific position?

I'm a newbie in the programming world, especially in Webots.
Do you guys have any ideas or tutorials for moving the e-puck robot to a specific position in Webots?
In my case, I'm trying to move the e-puck robot back to its start position: when the robot finishes performing its wall-following behavior, it should stop at the same position it started from.
I've been searching for ideas/tutorials to solve the problem, but I'm stuck. Can anyone help me?
Thank you.
I recommend you take a look at the e-puck curriculum for Webots, particularly the advanced part:
http://en.wikibooks.org/wiki/Cyberbotics%27_Robot_Curriculum/Advanced_Programming_Exercises
The idea is the following: if you want to do this under realistic conditions, implementing odometry for the robot is the first step. A "goto" function based on that odometry would be the second step. The link above contains information about both.
Ultimately, taking the other sensors (distance sensors, etc.) into account is the best solution, but it requires doing SLAM. In that case, you can link your controller with a SLAM library (e.g. gmapping or Karto):
http://en.wikipedia.org/wiki/Simultaneous_localization_and_mapping
If you don't care about realism, you can get the absolute coordinates of the robot (using a Supervisor, or a GPS+Compass device). Writing a "goto" function on this is trivial.
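A "goto" function of the kind mentioned above boils down to steering toward a target from a known pose. Here is a minimal sketch of one control step for a differential-drive robot like the e-puck; the pose would come from odometry or a GPS+Compass device, and the returned wheel speeds would be fed to the motors. The gains, thresholds, and function name are assumptions, not Webots API.

```python
import math

def goto_step(pose, target, max_speed=6.28):
    """One control step toward target.
    pose = (x, y, theta) from odometry or GPS + compass; target = (x, y).
    Returns (left_speed, right_speed) wheel commands."""
    dx, dy = target[0] - pose[0], target[1] - pose[1]
    distance = math.hypot(dx, dy)
    if distance < 0.01:  # close enough: stop
        return 0.0, 0.0
    # Heading error, wrapped to [-pi, pi]
    heading_error = math.atan2(dy, dx) - pose[2]
    heading_error = math.atan2(math.sin(heading_error), math.cos(heading_error))
    turn = max(-1.0, min(1.0, 2.0 * heading_error))   # P-controller on heading
    forward = max_speed * max(0.0, 1.0 - abs(turn))   # slow down while turning
    # Positive error = target to the left: slow the left wheel, speed the right
    return forward - max_speed * 0.5 * turn, forward + max_speed * 0.5 * turn
```

Called every simulation step, this drives the robot to the target and stops; returning to the start position is just calling it with the pose recorded at startup as the target.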

Assembly NASM how to create and work with a search tree without pointers

I have a problem where a guy has to cross several pillars that are connected by bridges with holes in them. The guy must choose the best way through the pillars: the way with the fewest holes from the first pillar to the last one.
Here is an image provided with the problem.
The program will receive as input the number of pillars and bridges, and a description of the number of holes between every pair of pillars. The guy can only move in one direction, toward the last pillar (no going back).
To me it looks like a tree-search problem, but I was told I shouldn't use pointers here because there is a way to organize and solve it without classic C-style tree definitions in assembly (where they'd be much harder), using only recursion.
How can I organize the "way tree" without using dynamic vectors/pointers?
You can build an n×n matrix (an array of length n²) where matrix[i][j] is the number of holes on the bridge between pillars i and j.
If there is no bridge between pillars i and j, set the entry to infinity (2^31 − 1).
Then you can find the best path using recursion, or simply with Dijkstra's algorithm.
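To make the matrix idea concrete before translating it to NASM, here is a sketch in Python: a flat adjacency matrix with a sentinel for "no bridge", searched with Dijkstra's algorithm. The 4-pillar example data is invented for illustration.

```python
import heapq

INF = 2**31 - 1  # "no bridge" marker, as suggested in the answer

def fewest_holes(matrix, start, goal):
    """Minimum total holes from pillar `start` to pillar `goal`.
    matrix[i][j] is the hole count on the bridge i -> j, or INF."""
    n = len(matrix)
    dist = [INF] * n
    dist[start] = 0
    heap = [(0, start)]
    while heap:
        d, i = heapq.heappop(heap)
        if i == goal:
            return d
        if d > dist[i]:          # stale heap entry, skip
            continue
        for j in range(n):
            if matrix[i][j] != INF and d + matrix[i][j] < dist[j]:
                dist[j] = d + matrix[i][j]
                heapq.heappush(heap, (dist[j], j))
    return INF

# 4 pillars; the guy only moves forward, so the matrix is upper-triangular
m = [[INF,   2,   5, INF],
     [INF, INF,   1,   4],
     [INF, INF, INF,   1],
     [INF, INF, INF, INF]]
```

In assembly the matrix is just a contiguous block of n² words indexed as `base + (i*n + j)*word_size`, so no pointers or dynamic allocation are needed; because movement is one-directional (the graph is a DAG), plain recursion over j > i works too.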

Non-optimal solutions for 15-puzzle game

I am applying A* (and IDA*) search with the Manhattan heuristic to find solutions to the 15-puzzle.
Given that I don't need an optimal solution, how can I speed up the search? The current routine is too slow.
Well, it's not exactly a solution, but it might help. I once worked on a HOG game that used this same puzzle as a minigame, and it turned out to be much easier to generate a problem than to find a solution.
What I mean is: we can turn a solved puzzle into an unsolved one by randomly moving the "window" according to the rules, logging each position for future use. Then we let the user play for a bit, and if she gives up, we can easily solve the puzzle for her by finding a position common to the user's log and ours. We just play back through the user's log to the common position, then from there to the solved position via our log.
Of course, this is a hack and not a real solution, but it works fine in gamedev, and not only for this particular game. Most repositioning puzzles can be "solved" this way.
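The scramble-and-log half of that hack can be sketched in a few lines: start from the solved board, make random legal blank moves while recording every state, and "solve" by replaying the log backwards. The tuple encoding and function names are assumptions for the sketch.

```python
import random

def blank_moves(state, n=4):
    """Indices the blank (0) can swap with in an n x n sliding puzzle,
    where the board is a flat tuple of n*n tiles."""
    i = state.index(0)
    r, c = divmod(i, n)
    moves = []
    if r > 0:     moves.append(i - n)  # slide blank up
    if r < n - 1: moves.append(i + n)  # slide blank down
    if c > 0:     moves.append(i - 1)  # slide blank left
    if c < n - 1: moves.append(i + 1)  # slide blank right
    return moves

def scramble(solved, steps, seed=0):
    """Scramble by random legal moves, logging every visited state."""
    rng = random.Random(seed)
    log = [solved]
    state = solved
    for _ in range(steps):
        s = list(state)
        i, j = s.index(0), rng.choice(blank_moves(state))
        s[i], s[j] = s[j], s[i]
        state = tuple(s)
        log.append(state)
    return log

solved = tuple(range(16))   # 0 marks the blank
log = scramble(solved, 30)
solution = log[::-1]        # replaying the log in reverse "solves" the puzzle
```

The other half, stitching the user's move log to the generator's log at a common position, is then just a lookup of a shared state in both lists.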

How does pathfinding in RTS video games work?

In a game such as Warcraft 3 or Age of Empires, the ways that an AI opponent can move about the map seem almost limitless. The maps are huge and the position of other players is constantly changing.
How does the AI path-finding in games like these work? Standard graph-search methods (such as DFS, BFS or A*) seem impossible in such a setup.
Take the following with a grain of salt, since I don't have first-hand experience with pathfinding.
That said, there are likely different approaches, but I think standard graph-search methods, notably (variants of) A*, are perfectly reasonable for strategy games. Most strategy games I know of seem to be based on a tile system, where the map is composed of little squares that are easily mapped to a graph. One example is StarCraft II (Screenshot), which I'll use as the running example in the remainder of this answer, because I'm most familiar with it.
While A* can be used for real-time strategy games, there are a few drawbacks that have to be overcome by tweaks to the core algorithm:
A* is too slow
Since an RTS is by definition "real time", waiting for the computation to finish would frustrate the player, because the units would lag. This can be remedied in several ways. One is to use multi-tiered A*, which computes a rough course before taking smaller obstacles into account. Another obvious optimization is to group units heading to the same destination into a platoon and only calculate one path for all of them.
Instead of the naive approach of making every single tile a node in the graph, one could also build a navigation mesh, which has fewer nodes and could be searched faster – this requires tweaking the search algorithm a little, but it would still be A* at the core.
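The platoon optimization mentioned above is simple to sketch: run A* once and hand the same waypoint list to every unit, with a per-unit formation offset so they don't stack on one tile. This toy version offsets only along x; a real game would offset perpendicular to the direction of travel. All names and coordinates are invented for the example.

```python
def platoon_paths(units, path, spacing=1.0):
    """Share one computed path among a platoon of units.
    units: list of unit ids; path: list of (x, y) waypoints.
    Each unit gets the same waypoints, shifted sideways by a
    formation offset centered on the shared path."""
    paths = {}
    for k, unit in enumerate(units):
        offset = (k - (len(units) - 1) / 2) * spacing
        paths[unit] = [(x + offset, y) for x, y in path]
    return paths

shared = [(0, 0), (0, 5), (4, 5)]            # one A* result, computed once
result = platoon_paths(["a", "b", "c"], shared)
```

The win is that one expensive search is amortized over the whole group; only the cheap per-unit offsetting and local collision avoidance run per unit.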
A* is static
A* works on a static graph, so what do you do when the landscape changes? I don't know how this is done in actual games, but I imagine pathfinding is re-run repeatedly to cope with new or removed obstacles. Maybe they are using an incremental version of A* (PDF).
To see a demonstration of StarCraft II coping with this, go to 7:50 in this video.
A* has perfect information
A part of many RTS games is unexplored terrain. Since you can't see the terrain, your units shouldn't know where to walk either, but often they do anyway. One approach is to penalize walking through unexplored terrain, so units are more reluctant to take advantage of their omniscience; another is to take the omniscience away and simply assume unexplored terrain is walkable. The latter can result in units stumbling into dead ends, sometimes ones that are obvious to the player, until they finally explore a path to the target.
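The penalty idea above amounts to a one-line change to the search's cost function: unexplored tiles stay walkable but cost more, so known routes win ties. The penalty value and the set-of-cells map representation are assumptions for the sketch.

```python
def path_cost(path, explored, penalty=3.0):
    """Total movement cost of a route over unit-cost tiles, where
    unexplored tiles are assumed walkable but cost `penalty` each,
    so the search prefers known ground."""
    return sum(1.0 if cell in explored else penalty for cell in path)

explored = {(0, 0), (0, 1), (0, 2)}
known_route   = [(0, 0), (0, 1), (0, 2)]   # entirely on explored tiles
unknown_route = [(0, 0), (1, 0), (1, 1)]   # same length, mostly unexplored
```

Plugged into A* as the per-step cost, this biases units toward explored terrain without forbidding shortcuts through the fog entirely.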
Fog of War is another aspect of this. For example, in StarCraft 2 there are destructible obstacles on the map. It has been shown that you can order a unit to move to the enemy base, and it will start down a different path if the obstacle has already been destroyed by your opponent, thus giving you information you should not actually have.
To summarize: You can use standard algorithms, but you may have to use them cleverly. And as a last bonus: I have found Amit’s Game Programming Information interesting with regard to pathing. It also has links to further discussion of the problem.
This is a bit of a simple example, but it shows that you can create the illusion of AI and in-depth pathfinding from a non-complex set of rules: Pac-Man Pathfinding.
Essentially, it is possible for the AI to know only local (nearby) information and make decisions based on that knowledge.
A* is a common pathfinding algorithm. This is a popular game development topic - you should be able to find numerous books and websites that contain information.
Check out visibility graphs. I believe that is what they use for path finding.