Shortest Path

September 7, 2011

Jump Point Search

Filed under: pathfinding — dharabor @ 16:32
This is the final article in my three-part look at symmetry reduction algorithms for speeding up pathfinding on uniform-cost grid maps. Recapping the story so far:

  • Part one introduces the notion of path symmetry: a property of uniform-cost grid maps which can significantly slow down search.
  • Part two discusses Rectangular Symmetry Reduction (RSR) [2]: a simple yet effective preprocessing algorithm that eliminates many path symmetries by decomposing a grid map into a set of empty rectangles.

In this article I describe Jump Point Search (JPS) [3]: an online symmetry breaking algorithm which speeds up pathfinding on uniform-cost grid maps by “jumping over” many locations that would otherwise need to be explicitly considered. JPS is faster and more powerful than RSR: it can consistently speed up A* search by an order of magnitude and more. Unlike other similar algorithms, JPS requires no preprocessing and has no memory overhead. Further, it is easily combined with most existing speedup techniques — including abstraction and memory heuristics.

Please note: this work was developed in conjunction with Alban Grastien. It forms part of my doctoral research into pathfinding at NICTA and The Australian National University. Electronic copies of the paper in which this idea was first described are available from my homepage.

The Algorithm

This section and the next describe the mechanical details and algorithmic properties of Jump Point Search. If you don’t care for such things, feel free to jump ahead and check out some screenshots.

JPS [3] can be described in terms of two simple pruning rules which are applied recursively during search: one rule applies to straight steps, the other to diagonal steps. The key intuition in both cases is to prune the set of immediate neighbours around a node by trying to prove that an optimal path (symmetric or otherwise) exists from the parent of the current node to each neighbour, and that this path does not involve visiting the current node. Figure 1 outlines the basic idea.

Figure 1: Neighbour Pruning

Node x is currently being expanded. The arrow indicates the direction of travel from its parent, either straight or diagonal. In both cases we can immediately prune all grey neighbours as these can be reached optimally from the parent of x without ever going through node x.

We will refer to the set of nodes that remain after pruning as the natural neighbours of the current node. These are marked white in Figure 1. Ideally, we only ever want to consider the set of natural neighbours during expansion. However, in some cases, the presence of obstacles may mean that we need to also consider a small set of up to k additional nodes (0 ≤ k ≤ 2). We say that these nodes are forced neighbours of the current node. Figure 2 gives an overview of this idea.

Figure 2: Forced Neighbours

Node x is currently being expanded. The arrow indicates the direction of travel from its parent, either straight or diagonal. Notice that when x is adjacent to an obstacle the highlighted neighbours cannot be pruned; any alternative optimal path, from the parent of x to each of these nodes, is blocked.
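To make the pruning rules concrete, below is a minimal C++ sketch of the straight-step case; the vertical case is symmetric. This is my own illustration rather than code from the paper: traversable(x, y) is an assumed map query, and dx is the horizontal direction of travel (+1 or -1) from the parent.

#include <vector>
#include <utility>

extern bool traversable(int x, int y); // assumed map interface

// Neighbours of (x, y) that survive pruning when we arrived here
// travelling horizontally in direction dx.
std::vector<std::pair<int,int>> prunedNeighbours(int x, int y, int dx) {
    std::vector<std::pair<int,int>> result;
    // Natural neighbour: keep travelling in the same direction.
    if (traversable(x + dx, y))
        result.push_back({x + dx, y});
    // Forced neighbours: an obstacle beside (x, y) blocks the
    // alternative optimal path to the diagonal tile behind it.
    if (!traversable(x, y - 1) && traversable(x + dx, y - 1))
        result.push_back({x + dx, y - 1});
    if (!traversable(x, y + 1) && traversable(x + dx, y + 1))
        result.push_back({x + dx, y + 1});
    return result;
}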

We apply these pruning rules during search as follows: instead of generating each natural and forced neighbour, we recursively prune the set of neighbours around each such node. Intuitively, the objective is to eliminate symmetries by recursively “jumping over” all nodes which can be reached optimally by a path that does not visit the current node. We stop the recursion when we hit an obstacle or when we find a so-called jump point successor. Jump points are interesting because they have neighbours that cannot be reached by an alternative symmetric path: the optimal path must go through the current node.

The details of the recursive pruning algorithm are reasonably straightforward: to ensure optimality we need only assign an ordering to how we process natural neighbours (straight steps before diagonal). I will not attempt to outline it further here; the full details are in the paper and my aim is only to provide a flavour for the work. Figure 3 gives two examples of the pruning algorithm in action. In the first case we identify a jump point by recursing straight; in the second case we identify a jump point by recursing diagonally.


(a) Jumping Straight

(b) Jumping Diagonally
Figure 3: Jumping Examples. Node x is currently being expanded. p(x) is its parent.

(a) We recursively apply the straight pruning rule and identify y as a jump point successor of x. This node is interesting because it has a neighbour z that cannot be reached optimally except by a path that visits x then y. The intermediate nodes are never explicitly generated or even evaluated.

(b) We recursively apply the diagonal pruning rule and identify y as a jump point successor of x. Notice that before each diagonal step we first recurse straight (dashed lines). Only if both straight recursions fail to identify a jump point do we step diagonally again. Node w, which is simply a forced neighbour of x, is generated as normal.
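For readers who prefer code to prose, here is a minimal sketch of the recursive jump procedure in C++. Again, this is my illustration of the idea rather than the implementation from the paper: traversable(x, y) is an assumed map query, (gx, gy) is the goal, and the goal is always treated as a jump point.

extern bool traversable(int x, int y); // assumed map interface

// Step from (x, y) in direction (dx, dy), with dx, dy in {-1, 0, +1}.
// Returns true and writes the jump point to (jx, jy) if one is found.
bool jump(int x, int y, int dx, int dy, int gx, int gy, int& jx, int& jy) {
    int nx = x + dx, ny = y + dy;                 // take one step
    if (!traversable(nx, ny)) return false;       // ran into an obstacle
    if (nx == gx && ny == gy) { jx = nx; jy = ny; return true; }
    if (dx != 0 && dy != 0) {                     // diagonal travel
        // A forced neighbour makes (nx, ny) a jump point.
        if ((!traversable(nx - dx, ny) && traversable(nx - dx, ny + dy)) ||
            (!traversable(nx, ny - dy) && traversable(nx + dx, ny - dy))) {
            jx = nx; jy = ny; return true;
        }
        // Straight recursions first (the dashed lines in Figure 3b).
        int tx, ty;
        if (jump(nx, ny, dx, 0, gx, gy, tx, ty) ||
            jump(nx, ny, 0, dy, gx, gy, tx, ty)) {
            jx = nx; jy = ny; return true;
        }
    } else if (dx != 0) {                         // horizontal travel
        if ((!traversable(nx, ny - 1) && traversable(nx + dx, ny - 1)) ||
            (!traversable(nx, ny + 1) && traversable(nx + dx, ny + 1))) {
            jx = nx; jy = ny; return true;
        }
    } else {                                      // vertical travel
        if ((!traversable(nx - 1, ny) && traversable(nx - 1, ny + dy)) ||
            (!traversable(nx + 1, ny) && traversable(nx + 1, ny + dy))) {
            jx = nx; jy = ny; return true;
        }
    }
    return jump(nx, ny, dx, dy, gx, gy, jx, jy);  // keep travelling
}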

Properties and Performance

Jump Point Search is nice for a number of reasons:

  1. It is optimal.
  2. It involves no pre-processing.
  3. It requires no extra memory overhead.
  4. It can consistently speed up A* search by over 10 times, making it not only competitive with, but often better than, approximate techniques such as HPA* [1].

Properties 1-3 are interesting theoretical results, and rather surprising, but I will not address them further here. My main objective in this article is simply to provide a flavour for the work; for a full discussion, including proofs, please see the original paper [3]. Property 4 is perhaps of broadest practical interest so I will give a short summary of our findings below.

We evaluated JPS on four map sets taken from Nathan Sturtevant’s freely available pathfinding library, Hierarchical Open Graph. Two of the benchmarks are realistic, originating from the popular BioWare video games Baldur’s Gate II: Shadows of Amn and Dragon Age: Origins. The other two, Adaptive Depth and Rooms, are synthetic, though the former could be described as semi-realistic. In each case we measured the relative speedup of A* + JPS vs. A* alone.

Briefly: JPS can speed up A* by a factor of 3-15 (Adaptive Depth), 2-30 (Baldur’s Gate), 3-26 (Dragon Age) and 3-16 (Rooms). In each case the lower figure represents average performance for short pathfinding problems and the higher figure for long pathfinding problems (i.e. the longer the optimal path to be found, the more benefit is derived from applying JPS).

What makes these results even more compelling is that in 3 of the 4 benchmarks A* + JPS was able to consistently outperform the well known HPA* algorithm [1]. This is remarkable as A* + JPS is always performing optimal search while HPA* is only performing approximate search. On the remaining benchmark, Dragon Age, we found there was very little to differentiate the performance of the two algorithms.

Caveat emptor: it is important to highlight at this stage that A* + JPS only achieves these kinds of speedups because each benchmark problem set contains a large number of symmetric path segments (usually manifested in the form of large open areas on the map). In such cases JPS can exploit the symmetry and ignore large parts of the search space. This means A* both generates and expands a much smaller number of nodes and consequently reaches the goal much sooner. When there is very little symmetry to exploit, however, we expect the gains to be more modest.

Screenshots

Below in Figure 4 are screenshots of A* + JPS in action. In each case tiles marked red must be explored in order to find the optimal solution (marked in blue). For comparison, I also show the number of tiles explored by A* + RSR and vanilla A* when solving the same problem instances.



(a) A* (Baldur’s Gate)


(b) A* + RSR (Baldur’s Gate)


(c) A* + JPS (Baldur’s Gate)



(d) A* (Adaptive Depth)


(e) A* + RSR (Adaptive Depth)


(f) A* + JPS (Adaptive Depth)



(g) A* (Rooms)


(h) A* + RSR (Rooms)


(i) A* + JPS (Rooms)
Figure 4: Search Effort. Comparative pathfinding examples from our experimental results. Images in the first column show total nodes expanded by A* (marked red) in order to find the optimal path to the goal (marked blue). Images in the middle and last columns show the total nodes expanded by A* + RSR and A* + JPS respectively. Notice that A* + JPS ignores many symmetric path segments (more than A* + RSR even) and typically reaches the goal with much less effort.

Final Thoughts

The explicit identification and elimination of symmetries in pathfinding domains is an idea that until now has received little attention in the academic literature. Approaches for dealing with symmetry, such as Jump Point Search, provide us with powerful new tools for reducing the size of explicit regular search spaces. By eliminating symmetry we speed up not just A* but entire classes of similar pathfinding algorithms. Also, consider: JPS is entirely orthogonal to almost every other speedup technique applicable to grid maps. Thus, there is no reason why we couldn’t combine it, or other similar methods, with hierarchical pathfinding approaches, memory heuristics or even other optimality-preserving state-space reduction techniques. That means the results presented thus far are only the tip of the iceberg in terms of performant grid-based pathfinding methods.

Another exciting aspect of this work is the possibilities it opens for further research. For example: could we pre-process the map and go even faster? Or: are there analogous jumping rules that one could develop for weighted grids? What about other domains? Could we apply the lessons learned thus far to help solve other interesting search problems? The answers to the first two questions already appear to be positive; the third is something I want to explore in the near future. Regardless of how it all turns out, one thing is certain: it’s an exciting time to be working on pathfinding!

References

[1] A. Botea, M. Müller, and J. Schaeffer. Near Optimal Hierarchical Path-finding. In Journal of Game Development (Issue 1, Volume 1), 2004.

[2] D. Harabor, A. Botea, and P. Kilby. Path Symmetries in Uniform-cost Grid Maps. In Symposium on Abstraction Reformulation and Approximation (SARA), 2011.

[3] D. Harabor and A. Grastien. Online Graph Pruning for Pathfinding on Grid Maps. In National Conference on Artificial Intelligence (AAAI), 2011.


September 1, 2011

Rectangular Symmetry Reduction

Filed under: pathfinding — dharabor @ 13:57
This is the second of my three-part look into symmetry reduction algorithms for speeding up grid-based pathfinding.

In part one I introduced the notion of path symmetry: a property of uniform-cost grid maps which can significantly slow down optimal search. In this article I discuss Rectangular Symmetry Reduction (RSR for short): a new pre-processing algorithm that explicitly identifies and eliminates symmetries from uniform cost grid maps. RSR [1] [2] is simple to understand, quick to apply and has low memory overheads. When combined with a standard search algorithm, such as A*, RSR can speed up optimal pathfinding by anywhere from several factors to an order of magnitude. These characteristics make RSR highly competitive with, and in many cases better than, competing state-of-the-art search space reduction techniques.

Please note: this work was developed in conjunction with Adi Botea and Philip Kilby. It forms part of my doctoral research into pathfinding at NICTA and The Australian National University. Electronic copies of the papers in which these ideas were first described are available from my homepage.

The Algorithm

RSR speeds up optimal pathfinding by decomposing an arbitrary uniform-cost grid map into a set of empty rectangles. The idea is to avoid symmetries during search by only ever expanding nodes from the perimeter of each empty rectangle, and never from the interior. To ensure optimal travel through each rectangle we will also add a series of macro edges that allow units to “jump” from one side of a rectangle’s perimeter to the directly opposite side.

(The remainder of this section, and the next, gives a mechanical overview of the algorithm and its properties. If you’re impatient, or don’t care about such things, you can skip ahead and check out some screenshots.)

RSR can be described in 3 simple steps. Steps 1 and 2 are applied offline; their objective is to identify and prune symmetry from the original grid map. The third step is an online node insertion procedure; its objective is to preserve optimality when searching for paths in the symmetry-reduced grid map.

RSR Step 1: Grid Decomposition

Decompose the grid map into a set of obstacle-free rectangles. The size of the rectangles can vary across a map, depending on the placement of the obstacles. Once the decomposition is complete, prune all nodes from the interior, but not the perimeter, of each empty rectangle.
RSR Step 2: Addition of Macro Edges

Add a series of macro edges that connect each node on the perimeter of a rectangle with other nodes from the perimeter. In a 4-connected map (shown here for simplicity) a single macro edge between nodes on opposite sides of each rectangle will suffice. If diagonal moves are allowed, a set of macro edges (as described in [1]) will be needed instead.
RSR Step 3: Online Insertion

When the start or goal is located in the interior of an empty rectangle, we use a temporary node re-insertion procedure. In a 4-connected map (shown here for simplicity) we connect the temporary node, online, to the 4 nearest perimeter nodes. A similar operation, involving sets of edges from each perimeter side, is used when diagonal moves are allowed.
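To illustrate Step 3, here is a small C++ sketch of the constant-time insertion for the 4-connected case. It is my own illustration, assuming a hypothetical Rect record produced by the decomposition in Step 1; the cost of each temporary edge is simply the distance to the corresponding perimeter node.

#include <vector>
#include <utility>

struct Rect { int x, y, w, h; }; // origin (top-left corner) and dimensions

// The four nearest perimeter nodes of an interior node (px, py) lie
// directly west, east, north and south of it; each is found in O(1)
// from the rectangle's origin and dimensions.
std::vector<std::pair<int,int>> insertionPoints(const Rect& r, int px, int py) {
    return {
        { r.x,           py },              // west side
        { r.x + r.w - 1, py },              // east side
        { px,            r.y },             // north side
        { px,            r.y + r.h - 1 }    // south side
    };
}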

Properties and Performance

RSR has several attractive properties:

  1. It preserves optimality.
  2. It has a small memory overhead in practice.
  3. Node insertion (Step 3) can be performed in constant time.
  4. It can speed up A* search by anywhere from several factors to an order of magnitude.

I will focus on points 2 and 4 in the remainder of this section. A thorough discussion of points 1 and 3 (including proofs) can be found in [1].

Memory Requirements:
In the most straightforward implementation of RSR we need to store the id of the parent rectangle for each of the n traversable nodes in the original grid. We also need to store the origin and dimensions of each rectangle (macro edges can be identified on-the-fly and do not need to be stored explicitly). This means that, in the worst case, we might need to store up to 5n integers: one parent id per node, plus four integers for each of at most n rectangles. In practice however we can usually do much better. For example: there is little benefit in storing or assigning nodes to rectangles with few or no interior nodes (1×1, 2×1, 2×2, 3×2, 3×3 etc.). We can also avoid the parent id overhead altogether and only store the set of identified rectangles. The only downside is that, during insertion (Step 3 above), we now need to search for the parent rectangle of the start and goal — instead of being able to identify it in constant time.
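In code, the straightforward scheme might look like this. A sketch under the assumptions above, with illustrative names; it is not the layout used in our implementation.

#include <cstdint>
#include <vector>

struct Rect { int32_t x, y, w, h; };      // 4 integers per rectangle

struct RSRStorage {
    std::vector<int32_t> parentRect;      // n entries: node id -> rectangle id
    std::vector<Rect>    rects;           // at most n rectangles: up to 4n more
};                                        // worst case: 5n integers in total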

Performance:
We evaluated RSR on a number of benchmark grid map sets taken from Nathan Sturtevant’s freely available pathfinding library, Hierarchical Open Graph. One of the map sets is from the game Baldur’s Gate II: Shadows of Amn. The other two map sets (Rooms and Adaptive Depth) are both synthetic, though the latter could be described as semi-realistic. I will give only a short summary of our findings; full results and discussion are available in the following papers: [1] [2].

In short: we observed that in most cases RSR can consistently speed up A* search by a factor of between 2-3 times (Baldur’s Gate), 2-5 times (Adaptive Depth) and 5-9 times (Rooms). In some cases the gain can be much higher: up to 30 times.

We found that the extent of the speed gains depends on the topography of the underlying grid map. On maps that feature large open areas or which can be naturally decomposed into rectangular regions, RSR is highly effective. When these conditions do not exist, the observed speed gains are more modest. This performance is competitive with, and often better than, Swamps [4], another state-of-the-art search space reduction technique. We did not identify a clear winner (each algorithm has different strengths) but did notice that the two methods are orthogonal and could be easily combined.

Screenshots

Below are screenshots of A* + RSR in action. In each case tiles marked red must be explored in order to find the optimal solution (marked in blue). For comparison, I also show the number of tiles explored by vanilla A* when solving the same problem instances. I give one example problem from each benchmark set.


(a) A* (Baldur’s Gate)


(b) A* (Adaptive Depth)


(c) A* (Rooms)



(d) A* + RSR (Baldur’s Gate)


(e) A* + RSR (Adaptive Depth)


(f) A* + RSR (Rooms)
Figure 2: Search Effort.
Comparative pathfinding examples from our experimental results. Images (a) to (c) show the total nodes expanded by A* in order to find the optimal path to the goal (marked blue). Images (d) to (f) show the total nodes expanded by A* + RSR. The respective domains are (from left to right): Baldur’s Gate, Adaptive Depth and Rooms. Notice that A* + RSR ignores many symmetric path segments and typically reaches the goal with much less effort.

Next Time

In the final part of this series I will discuss Jump Point Search [3]: a similar-yet-different symmetry breaking technique which builds on some of the ideas introduced here. Like RSR, Jump Point Search is highly effective and simple to apply; unlike RSR it can be applied online and has no memory overhead and no pre-processing requirements. In most cases, JPS is also faster than RSR. Stay tuned for that!

Continue to Part 3.

References

[1] D. Harabor, A. Botea, and P. Kilby. Path Symmetries in Uniform-cost Grid Maps. In Symposium on Abstraction Reformulation and Approximation (SARA), 2011.

[2] D. Harabor and A. Botea. Breaking Path Symmetries in 4-connected Grid Maps. In AAAI Conference on Artificial Intelligence and Interactive Digital Entertainment (AIIDE), 2010.

[3] D. Harabor and A. Grastien. Online Graph Pruning for Pathfinding on Grid Maps. In National Conference on Artificial Intelligence (AAAI), 2011.

[4] N. Pochter, A. Zohar, J. S. Rosenschein and A. Felner. Search Space Reduction Using Swamp Hierarchies. In National Conference on Artificial Intelligence (AAAI), 2010.

August 26, 2011

Fast Pathfinding via Symmetry Breaking

Filed under: pathfinding — dharabor @ 15:52
In this article, the first in a three-part series, I attempt to explain path symmetry: a property of uniform-cost grid maps and other regular search domains which can significantly slow down pathfinding. I describe how symmetry manifests itself and briefly discuss some approaches that other people have tried to improve things. Finally, I introduce two recent ideas from my doctoral research: Rectangular Symmetry Reduction (RSR) and Jump Point Search (JPS). Both are optimality-preserving techniques that explicitly identify and eliminate symmetry in order to speed up pathfinding by an order of magnitude and more.

In parts two and three I will give a technical overview of both RSR and JPS, discussing their strengths, weaknesses and observed performance on two popular video-game benchmarks kindly made available by Nathan Sturtevant and BioWare: Baldur’s Gate II and Dragon Age: Origins.

Please note: this work was developed in conjunction with my co-authors: Adi Botea, Alban Grastien and Philip Kilby. It forms part of my doctoral research into pathfinding at NICTA and The Australian National University. Electronic copies of the papers mentioned in this article are available from my homepage.

Path Symmetries

Often appearing as a pathfinding domain in areas such as robotics and video games, grid maps are a simple yet popular method for representing an environment. As it turns out, grids are also academically interesting for the following reason: between any given pair of locations, there usually exist many possible paths. Sometimes these paths represent alternative ways of reaching one location from the other, but more often they are symmetric in the sense that the only difference between them is the order in which moves appear.

Figure 1: Symmetry.
A simple grid-based pathfinding problem. For simplicity (but not in general) we allow only straight moves, not diagonal ones. Many optimal-length paths could be returned as the solution; we highlight only a few. Notice that each one is a permutation of all the others. In such cases we say that a symmetry exists between the paths.

Before proceeding further it is necessary to establish some precise terminology. For example: what exactly is a path? Traditionally, computer scientists define paths as ordered sequences of connected edges. The concatenation of these edges represents a walk in the search graph from some arbitrary start location to some arbitrary goal location. However this definition is too general to capture the idea of symmetry in grid maps. For that, we need a slightly different notion of a path:

Definition 1: Grid Path.
A path in a grid map is an ordered sequence of vectors, where each vector represents a single step from one node on the grid to an adjacent neighbouring node.

The distinction between edges and vectors is an important one as it allows us to distinguish between paths that are merely equivalent (i.e. of the same length) and those which are symmetric. But what, exactly, does it mean to have a symmetric relationship between paths?

Definition 2: Path Symmetry.
Two grid paths are symmetric if they share the same start and end point and one can be derived from the other by swapping the order of the constituent vectors.

As a clarifying example, consider the problem instance in Figure 1. Each highlighted path is symmetric to the others since they all have the same length and they all involve some permutation of 9 steps up and 9 steps right.
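Definition 2 is easy to operationalise. As a small illustration (my own sketch, not code from the papers), the following C++ fragment tests whether two grid paths that share a start point are symmetric, by checking that one is a permutation of the other’s step vectors; both paths are assumed to be valid (obstacle-free).

#include <algorithm>
#include <vector>
#include <utility>

using Step = std::pair<int,int>;   // e.g. {0,1} = up, {1,0} = right

// Two step sequences from a common start point are symmetric if one is
// a permutation of the other; sorting exposes the multiset of steps.
// (Equal multisets imply equal lengths and equal endpoints.)
bool symmetric(std::vector<Step> a, std::vector<Step> b) {
    if (a.size() != b.size()) return false;
    std::sort(a.begin(), a.end());
    std::sort(b.begin(), b.end());
    return a == b;
}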

In the presence of symmetry a search algorithm such as A* is forced to explore virtually every location along each optimal path. In Figure 1, depending on how we break ties, A* might expand every node on the entire map before reaching the goal. Further, A* will usually consider many other nodes that appear promising but that are not on any optimal path. For example, each time A* expands a node from the fringe of the search, it has already likely found almost every symmetric shortest path leading to that node (again, depending on how we break ties when two nodes appear equally good). But this effort is in vain if these expansions do not lead the search closer to the goal and instead into a dead-end. Thus arises the question which I will attempt to answer in this article: how do we deal with symmetry when pathfinding on grid maps?

Existing Methods For Speeding Up Search

A large number of techniques have been proposed to speed up pathfinding. Most can be classified as variations on three themes:

  1. Reducing the size of the search space through abstraction.

    Algorithms of this type are fast and use little memory but compute paths which are usually not optimal and must be refined via further search. Typical examples: HPA* [2], MM-Abstraction [9].

  2. Improving the accuracy of heuristic functions that guide search.

    Algorithms of this type usually pre-compute and store distances between key pairs of locations in the environment. Though fast and optimal, such methods can incur significant memory overheads, which is often undesirable. Typical examples: Landmarks [6], True Distance Heuristics [10].

  3. Dead-end detection and other state-space pruning methods.

    Algorithms of this type usually aim to identify areas on the map that do not need to be explored in order to reach the goal optimally. Though not as fast as abstraction or memory heuristics, such methods usually have low memory requirements and can improve pathfinding performance by several factors. Typical examples: Dead-end Heuristic [1], Swamps [8], Portal Heuristic [7].

My work in this area can be broadly classified as a search space pruning technique. Where it differs from existing efforts is that, instead of trying to identify areas that do not have to be crossed during search, I aim to identify and prune symmetric nodes that prevent the fast exploration of an area. This idea nicely complements existing search-space reduction techniques and, as it turns out, also complements most grid-based abstraction methods and memory heuristic approaches.

Rectangular Symmetry Reduction and Jump Point Search

My co-authors and I have developed two distinct approaches for explicitly identifying and eliminating path symmetries on grid maps. I outline them very briefly here and in more detail in upcoming posts:

  1. Rectangular Symmetry Reduction (RSR).

    RSR [4][5] can be described as a pre-processing algorithm which identifies symmetries by decomposing the grid map into empty rectangular regions. The central idea, then, is to avoid symmetries by only ever expanding nodes from the perimeter of each rectangle and never from the interior.

  2. Jump Point Search (JPS).

    JPS [3] consists of two simple neighbour pruning rules that are applied recursively during search. Each rule considers the direction of travel to a given node from its parent (either a straight step or a diagonal step) in order to prune the set of local neighbours (tiles) around that node. The central idea is to avoid any neighbours that could be reached optimally from the parent of any given node. In this way we are able to identify and avoid, on-the-fly, large sets of symmetric path segments; such as those in Figure 1.

RSR can pre-process most maps in under a second and has a very small memory overhead: we need to keep only the id of the parent rectangle each node is associated with (and the origin and dimensions of each such rectangle). By comparison, JPS has no pre-processing requirements and no memory overheads. Both algorithms are optimal and both can speed up A* by an order of magnitude and more. Figure 2 shows a typical example; for reference, I also include a screenshot for vanilla A*.



(a) A*


(b) A* + RSR


(c) A* + JPS
Figure 2: Search Effort.
All nodes marked red must be expanded before the optimal path to the goal (marked in blue) is found. Notice that, in the case of A*, many nodes on the fringe of the search (i.e. on the edge of the red area) can only be reached by paths that cross large regions of empty space. These paths usually have many symmetric alternatives and considering them all can require a substantial number of node expansion operations. RSR and JPS can detect and eliminate such symmetries and both reach the goal much sooner.

Next time I will explain, in more depth, the mechanical details of both RSR and JPS. In the meantime, electronic copies of the research papers in which RSR [4][5] and JPS [3] were originally described are available from my homepage.

If you have any questions or comments on this work, I’d be glad to hear from you. Please leave a message below or, if you prefer, drop me an email instead (my contact details are on the About page).

Continue to Part 2.
Continue to Part 3.

References

[1] Y. Björnsson and K. Halldörsson. Improved Heuristics for Optimal Path-finding on Game Maps. In AAAI Conference on Artificial Intelligence and Interactive Digital Entertainment (AIIDE), 2006.

[2] A. Botea, M. Müller, and J. Schaeffer. Near Optimal Hierarchical Path-finding. In Journal of Game Development (Issue 1, Volume 1), 2004.

[3] D. Harabor and A. Grastien. Online Graph Pruning for Pathfinding on Grid Maps. In National Conference on Artificial Intelligence (AAAI), 2011.

[4] D. Harabor, A. Botea, and P. Kilby. Path Symmetries in Uniform-cost Grid Maps. In Symposium on Abstraction Reformulation and Approximation (SARA), 2011.

[5] D. Harabor and A. Botea. Breaking Path Symmetries in 4-connected Grid Maps. In AAAI Conference on Artificial Intelligence and Interactive Digital Entertainment (AIIDE), 2010.

[6] A. V. Goldberg and C. Harrelson. Computing The Shortest Path: A* Search Meets Graph Theory. In SIAM Symposium on Discrete Algorithms (SODA), 2005.

[7] M. Goldenberg, A. Felner, N. Sturtevant and J. Schaeffer. Portal-Based True-Distance Heuristics for Path Finding. In Symposium on Combinatorial Search (SoCS), 2010.

[8] N. Pochter, A. Zohar, J. S. Rosenschein and A. Felner. Search Space Reduction Using Swamp Hierarchies. In National Conference on Artificial Intelligence (AAAI), 2010.

[9] N. R. Sturtevant. Memory-Efficient Abstractions for Pathfinding. In AAAI Conference on Artificial Intelligence and Interactive Digital Entertainment (AIIDE), 2007.

[10] N. R. Sturtevant, A. Felner, M. Barrer, J. Schaeffer and N. Burch. Memory-Based Heuristics for Explicit State Spaces. In International Joint Conference on Artificial Intelligence (IJCAI), 2009.

February 12, 2009

Clearance-based Pathfinding with Probabilistic Roadmaps

Filed under: pathfinding — dharabor @ 00:06

In my last two articles I discussed HAA*, a clearance-based pathfinding technique applicable to grid maps that efficiently computes paths for agents of different sizes and with distinct terrain traversal capabilities. Such problems are common in many modern video games and particularly in the real-time strategy and role-playing genres (Figure 1). The basic idea behind HAA* is really simple: for each tile in the grid, compute a clearance value to represent the maximal amount of free space at that location. By comparing the size of an arbitrary agent with a tile’s clearance value one can quickly determine which parts of the environment are traversable and which are not.

Figure 1. EA's Red Alert 3 features units with different shapes, sizes and terrain traversal capabilities. Units can be land-based, aquatic or amphibious.

But what happens when you can’t (or don’t want to) work with grids? More specifically, how do you deal with agent diversity in continuous environments? As it turns out this problem can be solved with a well known method from the field of robotics: the Probabilistic Roadmap (or PRM for short). PRMs are highly suited to capturing topographical information in complex multi-dimensional environments and have been effectively used for robot pathfinding since their inception in the mid 90s. 

In this article I want to describe a recent PRM method which partly inspired my work on HAA*; it’s based on a paper by Nieuwenhuisen, Kamphuis, Mooijekind and Overmars that was presented at CGAIDE’04. This is a really neat piece of research which makes several contributions:

  • It shows how to combine PRM construction with Voronoi diagrams. The effect is a roadmap in which every node maximises its distance from nearby obstacles. 
  • It addresses the “ugly path” problem that many PRM methods suffer from by describing a nice method for producing smooth transitions between graph nodes.
  • It describes how to combine PRMs with potential fields to achieve coherent group movement.

Since the scope is quite broad and my interest rather more narrow, I will only discuss the first aspect. If you’d like to know more, I recommend taking a look at the paper itself; it’s very well written and highly readable.

Point Sampling With Voronoi Diagrams

Traditional roadmap construction methods usually sample an environment by selecting a point and checking if it is collision-free for (some configuration of) a robot (Figure 2). The resultant PRM is thus only applicable to the robot for which it was created (or one smaller in size). If we try to re-use the PRM for a larger robot there is no guarantee that any computed path will be valid (Figure 3).

Figure 2. Random samples are taken from the environment. Only the points which are collision-free for some agent (shape) are kept.

Figure 3. A PRM constructed for a small agent. Trying to use the same PRM for a large tank agent results in an invalid path due to insufficient clearance in the red-highlighted area.

Figure 4. A simple map is divided into 6 Voronoi regions. Black areas are obstacles (which include the four map edges).

To solve this problem the authors propose a PRM construction method that uses the Voronoi diagram of the map as a guide. A Voronoi diagram is simply a division of a geometric space into regions in which every location is either closest to one particular obstacle or otherwise equidistant to several different obstacles (Figure 4).

The PRM construction method is very straightforward: 

  1. Using random sampling select non-obstacle points in the environment and retract them to the edges of the nearby Voronoi regions. These form the nodes of the new PRM.
  2. Connect all nodes to their nearest neighbours (for some value of “near”).
  3. Retract any edges which are too close to nearby obstacles (so agents don’t graze the sides of obstacles).
  4. Improve the roadmap by applying circular blends to smooth any jagged transitions between nodes (this could also be done on-the-fly for each agent rather than as a preprocessing step).
Figure 5. An example PRM produced by the method. Notice that all nodes (pink) are positioned to maximise distance from obstacles and all edges maintain a minimum obstacle distance also.

Figure 5 illustrates a typical result. Notice that the roadmap always retains maximum clearance from all obstacles at all times. Each node on the PRM (or backbone path) is annotated with a clearance value that represents the radius of a maximal bounding sphere centred at that position. Because the nodes on the roadmap are close together they form an overlapping clearance corridor which can be used to answer pathfinding queries for (circular shaped) agents of arbitrary size. Pretty cool, huh?
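As a tiny illustration of how such a roadmap answers size-aware queries (my own sketch, not code from the paper; Node is a hypothetical record): given a candidate backbone path whose nodes are annotated with clearance radii, admitting an agent reduces to a comparison along the corridor.

#include <vector>

struct Node { double x, y, clearance; }; // clearance = radius of free space

// A circular agent of radius r can follow the backbone path only if
// every node along it has clearance at least r.
bool corridorAdmits(const std::vector<Node>& path, double agentRadius) {
    for (const Node& n : path)
        if (n.clearance < agentRadius)
            return false;  // insufficient free space at this node
    return true;
}

A real planner would apply the same test node-by-node during search rather than to a finished path, but the comparison is identical.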

Disadvantages of Probabilistic Roadmaps

Although I quite like PRMs there are some limitations to be aware of. Perhaps the biggest is that all PRMs suffer a common ailment: representational incompleteness. Simply put, if a path exists in the original map, there is no guarantee that it will be found using a PRM. Actually, to be more precise, PRMs are only probabilistically complete, which is a fancy way of saying that as the sampling coverage of the terrain approaches 100% so too does the probability that the PRM can be used to find any valid path present in the original environment. Despite this property, PRMs have been shown to work quite well in practice, solving a large number of problems in very complex environments. The key to building a good PRM is the sampling strategy; a sufficient number of points have to be selected to reasonably approximate the original environment. How much is “sufficient” will depend on the situation; there are no hard-and-fast rules.

Another disadvantage of PRMs is that they only represent the terrain traversal requirements for one capability (i.e. all areas are either traversable by all agents or obstacles for all agents). This is in stark contrast to other pathfinding approaches, like HAA*, which build abstractions that reflect traversal requirements for all possible capabilities. One way to solve this problem is to simply build several PRMs, one for each distinct terrain traversal capability.

So that about wraps it up for today. As always, please feel free to contact me with any questions, comments or corrections you might have regarding this article.

References

Dennis Nieuwenhuisen, Arno Kamphuis, Marlies Mooijekind and Mark H. Overmars. Automatic Construction of Roadmaps for Path Planning in Games. In International Conference on Computer Games: Artificial Intelligence, Design and Education (CGAIDE), pages 285-292, 2004.

D. Harabor and A. Botea. Hierarchical Path Planning for Multi-size Agents in Heterogeneous Environments. In IEEE Symposium on Computational Intelligence and Games (CIG), 2008.

January 29, 2009

Clearance-based Pathfinding

Filed under: pathfinding — dharabor @ 23:50

Real-time strategy (RTS) games often feature a wide variety of unit types for the player to control. One of my favourite titles from the past, Westwood’s seminal Red Alert, had many classes of differently sized units: small infantry soldiers, medium-sized Jeeps and large tanks. In Red Alert 3, the most recent incarnation of the series, the diversity is increased even further by the introduction of units with terrain-specific movement capabilities; for example, new amphibious units can move across both ground and water areas. From a pathfinding perspective this introduces an interesting question: how can we efficiently search for valid routes for variable-sized agents in rich environments with many types of terrain?

This was the topic of a recent paper which I presented at CIG’08. In the talk I outlined Hierarchical Annotated A* (HAA*), a path planner which is able to efficiently address this problem by first analysing the terrain of a game map and then building a much smaller approximate representation that captures the essential topographical features of the original. In this article (the first in a series of two) I’d like to talk about the terrain analysis aspects of HAA* in some detail and outline how one can automatically extract topological information from a map in order to solve the variable-sized multi-terrain pathfinding problem. Next time, I’ll explain how HAA* is able to use this information to build space-efficient abstractions which allow an agent to very quickly find a high quality path through a static environment.

Clearance Values and the Brushfire Algorithm

Simply put, a clearance value is a distance-to-obstacle metric which is concerned with the amount of traversable space at a discrete point in the environment. Clearance values are useful because they allow us to quickly determine which areas of a map are traversable by an agent of some arbitrary size. The idea of measuring distances to obstacles in pathfinding applications is not a new one. The Brushfire algorithm for instance is particularly well known to robotics researchers (though for different reasons than those motivating this article). This simple method, which is applicable to grid worlds, proceeds as follows:

First, to each traversable tile immediately adjacent to a static obstacle in the environment (for example, a large rock or along the perimeter of a building) assign a value of 1. Next, each traversable neighbour of an assigned tile is itself assigned a value of 2 (unless a smaller value has already been assigned). The process continues in this fashion until all tiles have been considered. Figure 1 highlights this idea; white tiles are traversable and black tiles represent obstacles. (NB: the original algorithm actually assigns tiles adjacent to obstacles a value of 2; I use 1 here because it makes more sense for our purposes.)
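For concreteness, here is a short C++ sketch of Brushfire as a multi-source breadth-first search. This is a minimal illustration: I assume the map edge counts as an obstacle, and grid[y][x] is true for traversable tiles.

#include <queue>
#include <vector>
#include <utility>

std::vector<std::vector<int>>
brushfire(const std::vector<std::vector<bool>>& grid) {
    const int h = grid.size(), w = grid[0].size();
    std::vector<std::vector<int>> dist(h, std::vector<int>(w, 0));
    std::queue<std::pair<int,int>> q;
    const int dx[] = {1, -1, 0, 0}, dy[] = {0, 0, 1, -1};
    auto blocked = [&](int x, int y) {
        return x < 0 || y < 0 || x >= w || y >= h || !grid[y][x];
    };
    // Seed: every traversable tile adjacent to an obstacle gets value 1.
    for (int y = 0; y < h; ++y)
        for (int x = 0; x < w; ++x) {
            if (!grid[y][x]) continue;
            for (int i = 0; i < 4; ++i)
                if (blocked(x + dx[i], y + dy[i])) {
                    dist[y][x] = 1;
                    q.push({x, y});
                    break;
                }
        }
    // Expand: unassigned neighbours get their parent's value plus one.
    while (!q.empty()) {
        auto [x, y] = q.front(); q.pop();
        for (int i = 0; i < 4; ++i) {
            int nx = x + dx[i], ny = y + dy[i];
            if (blocked(nx, ny) || dist[ny][nx] != 0) continue;
            dist[ny][nx] = dist[y][x] + 1;
            q.push({nx, ny});
        }
    }
    return dist;
}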

Figure 1. A small toy map annotated with values computed by the Brushfire algorithm. Black tiles are not traversable.

Figure 2. A 1x1 agent finds a path to the goal. All traversable tiles are included in the search space.

Figure 3. A 2x2 agent finds a path to the goal. Only tiles with clearance > 1 are considered.

Figure 4. A 2x2 agent fails to find a path due to incorrect clearance values near the goal. Notice that the highlighted tiles are in fact traversable by this agent.

Brushfire makes use of a minimum obstacle distance metric to compute clearances, which works reasonably well in many situations. If we assume our agents (I use the terms agent and unit interchangeably) are surrounded by a square bounding box, we can immediately use the computed values to distinguish traversable locations from non-traversable ones by comparing the size of an agent with the clearance of a particular tile. Figure 2 shows the search space for an agent of size 1×1; in this case, any tile with clearance of at least 1 is traversable. Similarly, in Figure 3 the search space is shown for an agent of size 2×2. Notice that the bigger agent can occupy far fewer locations on the map: only tiles with clearance values of at least 2. Because this approach is so simple it has seen widespread use, even in video games. At GDC 2007 Chris Jurney (from Relic Entertainment) described a pathfinding system for dealing with variable-sized agents in Company of Heroes — which happens to make use of a variant of the Brushfire algorithm.

Unfortunately, clearances computed using minimum obstacle distance do not accurately represent the amount of traversable space at each tile. Consider the example in Figure 4; here our 2×2 agent incorrectly fails to find a path because all tiles in the bottleneck region are assigned clearance values less than 2. To overcome this limitation we will focus on an alternative obstacle-distance metric: true clearance.

The True Clearance Metric

The process of measuring true clearance for a given map tile is very straightforward: surround each tile with a clearance square (a bounding box, really) of size 1×1. If the tile is traversable, assign it an initial clearance of 1. Next, expand the square symmetrically down and to the right, incrementing the clearance value each time, until no further expansions are possible. An expansion is successful only if all tiles within the square are traversable. If the clearance square intersects with an obstacle or with the edge of the map the expansion fails and the algorithm selects another tile to process. The algorithm terminates when all tiles on the map have been considered. Figure 5 highlights the process and Figure 6 shows the result on our running example.

Figure 5. Computing true clearance. After selecting a tile (a), the square is expanded twice (b, c) before intersecting an obstacle (d).

Figure 6. The toy map from Figure 1, annotated with true clearance values.

Notice that by using true clearance the example from Figure 4 now succeeds in finding a solution. In fact, one can prove that using the true clearance metric it is always possible to find a solution for any agent size if a valid path exists to begin with (i.e. the method is complete; see my paper for the details).
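Here is a compact C++ sketch of the per-tile computation just described (my own naive version, kept simple for clarity; grid[y][x] is true for traversable tiles):

#include <vector>

int trueClearance(const std::vector<std::vector<bool>>& grid, int x, int y) {
    const int h = grid.size(), w = grid[0].size();
    if (!grid[y][x]) return 0;   // obstacles have no clearance
    int c = 1;                   // a traversable tile starts as a 1x1 square
    for (;;) {
        // To grow from c x c to (c+1) x (c+1), the new bottom row and
        // right column must fit on the map and be fully traversable.
        if (x + c >= w || y + c >= h) break;
        bool free = true;
        for (int i = 0; i <= c && free; ++i)
            free = grid[y + c][x + i] && grid[y + i][x + c];
        if (!free) break;
        ++c;
    }
    return c;
}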

Dealing With Multiple Terrain Types

Until now the discussion has been limited to a single type of traversable terrain (white tiles). As it turns out, however, it is relatively easy to apply any clearance metric to maps involving arbitrarily many terrain types. Given a map with n terrain types we begin by first identifying the set of possible terrain traversal capabilities an agent may possess. A capability is simply a disjunction of terrains used to specify where each agent can traverse. So, on a map with 2 terrains such as {Ground, Water} the corresponding list of all possible capabilities is given by a set of sets; in this case {{Ground}, {Water}, {Ground, Water}}. Note that, for simplicity, I assume the travelling speed across all terrains is constant (but this constraint is easily lifted).

Next, we calculate and assign a clearance value to each tile for every capability. Figures 7-9 show the corresponding clearance values for each capability on our toy map; notice that we’ve converted some of the black (obstacle) tiles to blue to represent the Water terrain type (which some agents can traverse).

Figure 7. True clearance annotations for the {Ground} capability (only white tiles are traversable).

Figure 8. True clearance annotations for the {Water} capability (only blue tiles are traversable).

Figure 9. True clearance annotations for the {Ground, Water} capability (both white and blue tiles are traversable).

Theoretically this means that, at most, each tile will store 2^(n-1) clearance value annotations: a tile only needs annotations for the capabilities that include its own terrain type (again, see the paper for details). I suspect this overhead could be reduced with clever use of compression, though I did not attempt more than a naive implementation. Alternatively, if memory is very limited (as is the case with many robot-related applications) one can simply compute true clearance on demand for each tile, thus trading off a little processing speed for more space.
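A simple way to organise this in code (my own sketch, using bit flags for terrains; it is not the representation from the paper or from HOG):

#include <cstdint>
#include <unordered_map>

enum Terrain : uint8_t { Ground = 1, Water = 2 };   // bit flags; n = 2 here

struct AnnotatedTile {
    uint8_t terrain;                                // this tile's terrain
    std::unordered_map<uint8_t, int> clearance;     // capability -> clearance
};

// The traversability test used during search: the tile's terrain must be
// part of the agent's capability and the matching clearance must be at
// least the agent's size.
bool canOccupy(const AnnotatedTile& t, uint8_t capability, int agentSize) {
    if ((t.terrain & capability) == 0) return false;
    auto it = t.clearance.find(capability);
    return it != t.clearance.end() && it->second >= agentSize;
}

Here the {Ground, Water} capability is simply Ground | Water.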

Annotated A*

In order to actually find a path for an agent with arbitrary size and capability we can use the venerable A* algorithm, albeit with some minor modifications. First, we must pass two extra parameters to A*’s search function: the agent’s size and capability. Next, we augment the search function slightly so that before a tile is added to A*’s open list we first verify that it is in fact traversable for the given size and capability; everything else remains the same. A tile is traversable only if its terrain type appears in the set of terrains that comprise the agent’s capability and if the corresponding clearance value is at least equal to the size of the agent. To illustrate these ideas I’ve put together a simplified pseudocode implementation of the algorithm, Annotated A*:

Function: getPath
Parameters: start, goal, size, capability

push start onto the open list
while the open list is not empty
        take the most promising node from the open list; call it current
        if current is the goal, return the path
        for each neighbour of current
                if neighbour is on the closed list, skip it
                else if neighbour is already on the open list, update its weights
                else if clearance(neighbour, capability) >= size, push neighbour onto the open list
                else, skip neighbour (its clearance is too small for this agent)
        push current onto the closed list
return failure

So, that’s it! If you’re interested in playing with a working implementation of Annotated A* you can check out the source code I wrote to evaluate it. The code itself is written in C++ and based on the University of Alberta’s freely available pathfinding library, Hierarchical Open Graph (or HOG). HOG compiles on most platforms; I’ve personally tested it on both OSX and Linux and I’m told it works on Windows too. The classes of most interest are probably AnnotatedMapAbstraction, which deals with computing true clearance values for a map, and AnnotatedAStar which is the reference implementation of the search algorithm described here.

In the second part of this article I’m going to expand on the ideas presented here and explain how one can build an efficient hierarchical representation of an annotated map that requires much less memory to store and retains the completeness characteristics of the original. I’ll talk about why this is important and how one can use such a graph to answer pathfinding queries much faster than Annotated A*. At some point soon I also hope to discuss how one can use clearance-based pathfinding on continuous (non-grid) maps.

January 8, 2009

CIG’08 Conference Summary

Filed under: conference — dharabor @ 13:31

Last month I was fortunate enough to attend the IEEE Symposium on Computational Intelligence and Games (CIG’08) which was held over 4 days (15-18 December) at the University of Western Australia in sunny Perth.

For those unfamiliar with the event, the CIG series of conferences (now in its 4th iteration) has quickly established itself as a forum for academics and games industry insiders to discuss recent advances in AI and explore directions for future work. Its aims are actually not dissimilar to those of AIIDE (which was held a few months ago). The main difference between them seems to be that CIG has a rather strong evolution and learning flavour whereas AIIDE is more general; in the academic community the two are regarded as about equal.

I was looking forward to this event because one of the keynote speakers was none other than Jonathan Schaeffer. You might have heard that Jonathan solved checkers a couple of years ago, but he’s also rather well known for his work on pathfinding (he co-authored the HPA* algorithm, among other notable achievements). The other keynote speakers were Penny Sweetser from 2K Australia and Jason Hutchens from Interzone (a local development house based in Perth).

Aside from the academic track, in which over 50 papers were presented, the conference also featured a number of competitions: TORCS Car Racing, Ms. Pac-Man and the 2K Bot Prize. The latter, billed as a Turing Test for bots, garnered quite a bit of interest, due mostly, I suspect, to the AUD$10,000 prize on offer for any bot able to fool 4/5 human judges playing Unreal Tournament 2004.

What follows is my summary of the event; my notes are by no means exhaustive so I’ll attempt to focus on the bits I found particularly interesting.

Day 1

The first day of the conference was a warm-up of sorts with a number of tutorials on different aspects of computational intelligence from learning algorithms to opponent modeling and player satisfaction. I think the standout session for me was Simon Lucas’ talk on co-evolution and Temporal Difference Learning. TDL has long been on my never-ending “stuff-to-read-about” list and Simon is a great presenter able to clearly communicate complex ideas in a straightforward manner.

The day ended with a combined poster session and welcoming festivities. There were a number of interesting posters: Julien Kloetzer undertook a comparative study of different algorithms for solving the game of Amazons, Vukosi Marivate looked at evolving board-game agents using social learning and Stefan Wender used reinforcement learning to find city placement strategies in Civilization IV. I’m a big Civ fan so I spent a while talking to Stefan about his approach; they use the Civ scenario editor to train an offline learner that evaluates candidate locations using scoring information from the game — it’s pretty neat!

Day 2

Jonathan Schaeffer kicked off the day with a fantastic keynote on how he and his team at the University of Alberta solved Checkers over an almost 20 year period. Along the way they solved the famous 100 year checkers position and pushed the very limits of computation and storage. According to Jonathan they actually had to stop the search for 4 years, between 1997 and 2001, to wait until 64-bit precision machines became commonplace. Checkers has 10²⁰ possible positions (a rather large number!) so it’s very impressive that it was solved at all.

Simon Lucas looked at learning rates of evolutionary and TDL algorithms. He observes that co-evolution is very slow and suggests using TDL to bootstrap the learning process.

Pieter Spronck had a paper dealing with units in a group learning formations and moving as a whole (the effect sounds similar to flocking).

Stephen Hladky talked about estimating the positions of opponents in FPS games. His approach relies on a database of game logs (CounterStrike in this case) to figure out the probability of a player moving from one area to another. The approach seems to work well and the idea was, I thought, rather nice.

Mike Preuss discussed the use of self-organising maps to learn which unit is best against an opposing unit type in an RTS game.

Raphael Stuer looked at using influence maps and flocking to prevent groups of units getting killed while pathfinding through enemy territory. In their implementation, agents path around areas influenced by strong enemy structures (like walls of towers) and plow through easy enemy structures when the group is at an advantage. Very cool!

Day 3

Penny Sweetser from 2K Australia started the day with an interesting keynote on using emergence in games to improve the player experience. Penny posited that emergence in games should be limited within certain restricted parameters rather than sandbox-style emergence (local rather than global emergence). The talk was actually quite broad in scope, covering a lot of ground from emergent unit behaviour to narrative.

Bobby Bryant discussed his progress in trying to evolve a visual controller for FPS shooters. The main idea involves training a genetic algorithm to recognise visual input from a low-resolution “retina” in order to create aiming behaviours for bots.

John Reede had a neat paper about using Modular Neural Nets for agent control in a realtime top-down shooter. John’s controller has two components, one for movement and the other for shooting; each trained independently of the other. The main advantage of MNNs seems to be that they can speed up the learning process significantly and evolve better fitness functions.

There were 3 competitions run during the afternoon:

  • Ms. PacMan (won by Marcus Gallagher)
  • TORCS Car Racing (won by Luigi Cardamone [ed: Thanks Julian!])
  • 2K Bot Prize (won by a team from the Czech Republic)

I spent most of my time at the bot prize chatting to participants and spectators. Winning the major prize of the competition involved building a bot able to play Unreal Tournament such that 4 of the 5 judges are fooled into thinking they are playing a human. Unfortunately the winning entry, AMIS, did not win the major prize (only 2 of the 5 judges were fooled) but there was a minor prize on offer and a nice-looking trophy.

I took a few photos during the competition to try and capture the flavour:

Day 4

Jason Hutchens from Interzone Games opened the final day with a keynote about pseudo-intelligence as entertainment. Jason’s talk centred on a major problem for AI game developers: building surprising yet relevant behaviour. In the talk Jason posited that game AI is a Turing test where players are the judge. Learning and other emergent behaviours make this goal difficult to achieve, so AI developers cheat. Jason also discussed some of the challenges faced by game developers, including tight deadlines, camera problems, terrain analysis, dynamic obstacles, spatial awareness and building better tools to debug AI.

One of the interesting papers of the day came from Marcus Gallagher who discussed his approach for winning the Ms. Pac Man competition. Marcus described a simple (but effective) system using influence maps (potential fields really) to drive Pac Man away from ghosts and towards pills.

Mark Wittkamp discussed a learning method for improving group behaviour for ghosts in Pac-Man. The idea was interesting but the overall result a little disappointing; Pac-Man only dies about 1/6 more often than against the original ghost AI from 30 years ago (using a hand-coded Pac-Man bot to train against).

Toward the end of the conference I gave my talk on efficient pathfinding approaches for variable-size units in heterogeneous terrain environments. My paper describes HAA*, a hierarchical path planner that builds a single compact graph to represent complex multi-terrain game maps. I believe this is the first time this problem has been thoroughly studied in the academic literature. I spoke to a few game developers at the conference who had come across this issue; it seems they solve it by storing different copies of the map: one for each agent size/terrain-traversal capability combination. I believe HAA* should be much more (space) efficient and I hope to discuss the technique at length in an upcoming post. In the meantime, you can read the paper, take a look at my presentation slides or play around with the source code.
