Steve Jobs passed away today. A profoundly influential man, his work touched the lives of millions. Including mine. I wanted to reflect on his passing and, since I don’t really have an alternate forum, I’ll share my story here.
In late 1991 I received my first machine: an Apple II/c. I remember a feeling of intense disappointment as my father proudly started unpacking and connecting up the little cream-coloured device. This was nothing like the IBM PCs and Amigas I’d spent so many weeks reading and dreaming about. There was no hard drive, no sound card, nowhere to insert a CD. Just a 5.25″ drive and a box of ancient software. Hand written labels adorned the upper-right portion of most disks, each title more cryptic than the last: “MousePaint”, read one; “Logo”, another. Instead of a 14″ monitor and glorious colour graphics, I found myself staring at a 9″ monochrome green display. The thing made funny beeping noises when you switched it on. For goodness sake, it had a handle. What could he have been thinking?
I never uttered these thoughts out loud. The look on my father’s face then was one of sheer triumph and delight. How could I break his heart? I feigned excitement and soldiered on, determined to make the best of it. Then, something happened: I fell in love.
Over the weeks and months that followed I became obsessed with knowing my Apple II/c. I explored every disk. I pored over every manual. I spent countless hours with friends adventuring in Daventry, guiding Captain Goodnight through the Islands of Fear and helping a little spider reach the attic of a cider factory. I also started programming.
10 PRINT "HELLO WORLD"
20 GOTO 10
I was never a child prodigy; I didn’t write my own games from scratch or teach myself assembler. I copied thousands of lines of BASIC from reference books so I could play Lunar Lander. I drew patterns in Logo. It wasn’t much, but it was enough. The Apple II/c sparked my curiosity and fired my imagination as no contemporary machine ever could. There was a simple elegance to the way everything worked that encouraged exploration. The BASIC interpreter, for example, didn’t come on a disk: it was built in and started automatically. “Here I am” it would say, blinking its little green cursor encouragingly whenever you’d forget to insert a boot disk, “Take me for a spin. Let’s do something amazing together”.
I didn’t know it then but those early fumbled efforts were the first flirtatious steps of a life-long romance with computing and Computer Science. I eventually graduated to larger and more powerful machines: a 486 PC some years later, a Pentium thereafter. From BASIC I went to Pascal then C and now to… everything. Yet despite all those changes it was that little machine, that Apple II/c, which helped me discover a passion that continues to shape who I am even to this day.
Now, 20 years removed from those halcyon days, I once more find myself the happy owner of an Apple computer. Its form may have changed in the intervening period but its role in my life has not: It helps connect me. It entertains me. It turns my imagination into reality.
Thank you father, thank you Apple, and thank you Steve.
In this article I describe Jump Point Search (JPS) [3]: an online symmetry breaking algorithm which speeds up pathfinding on uniform-cost grid maps by “jumping over” many locations that would otherwise need to be explicitly considered. JPS is faster and more powerful than RSR: it can consistently speed up A* search by an order of magnitude and more. Unlike other similar algorithms, JPS requires no preprocessing and has no memory overheads. Further, it is easily combined with most existing speedup techniques — including abstraction and memory heuristics.
Please note: this work was developed in conjunction with Alban Grastien. It forms part of my doctoral research into pathfinding at NICTA and The Australian National University. Electronic copies of the paper in which this idea was first described are available from my homepage.
JPS [3] can be described in terms of two simple pruning rules which are applied recursively during search: one rule is specific to straight steps, the other for diagonal steps. The key intuition in both cases is to prune the set of immediate neighbours around a node by trying to prove that an optimal path (symmetric or otherwise) exists from the parent of the current node to each neighbour and that path does not involve visiting the current node. Figure 1 outlines the basic idea.
Node x is currently being expanded. The arrow indicates the direction of travel from its parent, either straight or diagonal. In both cases we can immediately prune all grey neighbours as these can be reached optimally from the parent of x without ever going through node x.
We will refer to the set of nodes that remain after pruning as the natural neighbours of the current node. These are marked white in Figure 1. Ideally, we only ever want to consider the set of natural neighbours during expansion. However, in some cases, the presence of obstacles may mean that we need to also consider a small set of up to k additional nodes (0 ≤ k ≤ 2). We say that these nodes are forced neighbours of the current node. Figure 2 gives an overview of this idea.
Node x is currently being expanded. The arrow indicates the direction of travel from its parent, either straight or diagonal. Notice that when x is adjacent to an obstacle the highlighted neighbours cannot be pruned; any alternative optimal path, from the parent of x to each of these nodes, is blocked.
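The pruning rules in Figures 1 and 2 can be sketched in code. The following is a minimal illustration, not the authors’ implementation: it assumes a grid stored as `grid[y][x]` with 0 meaning traversable, and a direction of travel `(dx, dy)` with components in {-1, 0, 1}. The helper `walkable` is likewise an assumption made for the sketch.

```python
def pruned_neighbours(grid, x, y, dx, dy):
    """Return the natural plus forced neighbours of (x, y) when it was
    reached from its parent by a step in direction (dx, dy)."""
    def walkable(px, py):
        return (0 <= px < len(grid[0]) and 0 <= py < len(grid)
                and grid[py][px] == 0)

    nbrs = []
    if dx != 0 and dy != 0:                     # diagonal step
        # natural neighbours: both straight components plus the diagonal
        if walkable(x, y + dy):      nbrs.append((x, y + dy))
        if walkable(x + dx, y):      nbrs.append((x + dx, y))
        if walkable(x + dx, y + dy): nbrs.append((x + dx, y + dy))
        # forced neighbours arise when an alternative optimal path
        # from the parent is blocked by an adjacent obstacle
        if not walkable(x - dx, y) and walkable(x - dx, y + dy):
            nbrs.append((x - dx, y + dy))
        if not walkable(x, y - dy) and walkable(x + dx, y - dy):
            nbrs.append((x + dx, y - dy))
    elif dx != 0:                               # horizontal step
        if walkable(x + dx, y):      nbrs.append((x + dx, y))
        if not walkable(x, y + 1) and walkable(x + dx, y + 1):
            nbrs.append((x + dx, y + 1))
        if not walkable(x, y - 1) and walkable(x + dx, y - 1):
            nbrs.append((x + dx, y - 1))
    else:                                       # vertical step
        if walkable(x, y + dy):      nbrs.append((x, y + dy))
        if not walkable(x + 1, y) and walkable(x + 1, y + dy):
            nbrs.append((x + 1, y + dy))
        if not walkable(x - 1, y) and walkable(x - 1, y + dy):
            nbrs.append((x - 1, y + dy))
    return nbrs
```

On an open map a straight step thus yields a single natural neighbour, and a diagonal step yields three; forced neighbours are only added when an adjacent obstacle blocks the alternative path.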
The details of the recursive pruning algorithm are reasonably straightforward: to ensure optimality we need only assign an ordering to how we process natural neighbours (straight steps before diagonal). I will not attempt to outline it further here; the full details are in the paper and my aim is only to provide a flavour for the work. Figure 3 gives two examples of the pruning algorithm in action. In the first case we identify a jump point by recursing straight; in the second case we identify a jump point by recursing diagonally.
(a) We recursively apply the straight pruning rule and identify y as a jump point successor of x. This node is interesting because it has a neighbour z that cannot be reached optimally except by a path that visits x then y. The intermediate nodes are never explicitly generated or even evaluated.
(b) We recursively apply the diagonal pruning rule and identify y as a jump point successor of x. Notice that before each diagonal step we first recurse straight (dashed lines). Only if both straight recursions fail to identify a jump point do we step diagonally again. Node w, which is simply a forced neighbour of x, is generated as normal.
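The recursive jumping behaviour illustrated in Figure 3 can be sketched as follows. This is a hedged illustration under the same assumed conventions as before (`grid[y][x]`, 0 = traversable), not the authors’ code: it steps repeatedly in direction `(dx, dy)` and stops only at the goal, at a node with a forced neighbour, or (for diagonal travel) when a straight sub-recursion succeeds.

```python
def jump(grid, x, y, dx, dy, goal):
    """Step from (x, y) in direction (dx, dy); return the next jump point,
    or None if the recursion runs into an obstacle or the map edge."""
    def walkable(px, py):
        return (0 <= px < len(grid[0]) and 0 <= py < len(grid)
                and grid[py][px] == 0)

    nx, ny = x + dx, y + dy
    if not walkable(nx, ny):
        return None
    if (nx, ny) == goal:                        # the goal is always a jump point
        return (nx, ny)
    if dx != 0 and dy != 0:                     # diagonal travel
        # a forced neighbour makes (nx, ny) a jump point
        if (not walkable(nx - dx, ny) and walkable(nx - dx, ny + dy)) or \
           (not walkable(nx, ny - dy) and walkable(nx + dx, ny - dy)):
            return (nx, ny)
        # recurse straight in both component directions before stepping
        # diagonally again (straight before diagonal preserves optimality)
        if jump(grid, nx, ny, dx, 0, goal) is not None or \
           jump(grid, nx, ny, 0, dy, goal) is not None:
            return (nx, ny)
    elif dx != 0:                               # horizontal travel
        if (not walkable(nx, ny + 1) and walkable(nx + dx, ny + 1)) or \
           (not walkable(nx, ny - 1) and walkable(nx + dx, ny - 1)):
            return (nx, ny)
    else:                                       # vertical travel
        if (not walkable(nx + 1, ny) and walkable(nx + 1, ny + dy)) or \
           (not walkable(nx - 1, ny) and walkable(nx - 1, ny + dy)):
            return (nx, ny)
    return jump(grid, nx, ny, dx, dy, goal)     # keep travelling
```

The intermediate nodes visited by the recursion are never inserted into A*’s open list; only the returned jump points are generated as successors.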
Properties 1-3 are interesting, and rather surprising, theoretical results, but I will not address them further here. My main objective in this article is simply to provide a flavour for the work; for a full discussion, including proofs, please see the original paper [3]. Property 4 is perhaps of broadest practical interest so I will give a short summary of our findings below.
We evaluated JPS on four map sets taken from Nathan Sturtevant’s freely available pathfinding library, Hierarchical Open Graph. Two of the benchmarks are realistic, originating from the popular BioWare video games Baldur’s Gate II: Shadows of Amn and Dragon Age: Origins. The other two, Adaptive Depth and Rooms, are synthetic, though the former could be described as semi-realistic. In each case we measured the relative speedup of A* + JPS vs. A* alone.
Briefly: JPS can speed up A* by a factor of 3 to 15 (Adaptive Depth), 2 to 30 (Baldur’s Gate), 3 to 26 (Dragon Age) and 3 to 16 (Rooms). In each case the lower figure represents average performance on short pathfinding problems and the higher figure on long pathfinding problems (i.e. the longer the optimal path to be found, the more benefit is derived from applying JPS).
What makes these results even more compelling is that in 3 of the 4 benchmarks A* + JPS was able to consistently outperform the well known HPA* algorithm [1]. This is remarkable as A* + JPS is always performing optimal search while HPA* is only performing approximate search. On the remaining benchmark, Dragon Age, we found there was very little to differentiate the performance of the two algorithms.
Caveat emptor: It is important to highlight at this stage that A* + JPS only achieves these kinds of speedups because each benchmark problem set contains a large number of symmetric path segments (usually manifested in the form of large open areas on the map). In such cases JPS can exploit the symmetry and ignore large parts of the search space. This means A* both generates and expands a much smaller number of nodes and consequently reaches the goal much sooner. When there is very little symmetry to exploit, however, we expect that our gains will be more modest.
Another exciting aspect of this work is the possibilities it opens for further research. For example: could we pre-process the map and go even faster? Or: are there analogous jumping rules that one could develop for weighted grids? What about other domains? Could we apply the lessons learned thus far to help solve other interesting search problems? The answers to the first two questions already appear to be positive; the third is something I want to explore in the near future. Regardless of how it all turns out, one thing is certain: it’s an exciting time to be working on pathfinding!
[1] A. Botea, M. Müller, and J. Schaeffer. Near Optimal Hierarchical Path-finding. In Journal of Game Development (Issue 1, Volume 1), 2004.
[2] D. Harabor, A. Botea, and P. Kilby. Path Symmetries in Uniform-cost Grid Maps. In Symposium on Abstraction Reformulation and Approximation (SARA), 2011.
[3] D. Harabor and A. Grastien. Online Graph Pruning for Pathfinding on Grid Maps. In National Conference on Artificial Intelligence (AAAI), 2011.
In part one I introduced the notion of path symmetry: a property of uniform-cost grid maps which can significantly slow down optimal search. In this article I discuss Rectangular Symmetry Reduction (RSR for short): a new pre-processing algorithm that explicitly identifies and eliminates symmetries from uniform cost grid maps. RSR [1] [2] is simple to understand, quick to apply and has low memory overheads. When combined with a standard search algorithm, such as A*, RSR can speed up optimal pathfinding by anywhere from several factors to an order of magnitude. These characteristics make RSR highly competitive with, and in many cases better than, competing state-of-the-art search space reduction techniques.
Please note: this work was developed in conjunction with Adi Botea and Philip Kilby. It forms part of my doctoral research into pathfinding at NICTA and The Australian National University. Electronic copies of the papers in which these ideas were first described are available from my homepage.
(The remainder of this section, and the next, give a mechanical overview of the algorithm and its properties. If you’re impatient, or don’t care about such things, you can skip ahead and check out some screenshots.)
RSR can be described in 3 simple steps. Steps 1 and 2 are applied offline; their objective is to identify and prune symmetry from the original grid map. The third step is an online node insertion procedure; its objective is to preserve optimality when searching for paths in the symmetry-reduced grid map.
I will focus on points 2 and 4 in the remainder of this section. A thorough discussion of points 1 and 3 (including proofs) can be found in [1].
Memory Requirements:
In the most straightforward implementation of RSR we need to store the id of the parent rectangle for each of the n traversable nodes in the original grid. We also need to store the origin and dimensions of each rectangle (macro edges can be identified on-the-fly and do not need to be stored explicitly). This means that, in the worst case, we might need to store up to 5n integers. In practice, however, we can usually do much better. For example: there is little benefit in storing or assigning nodes to rectangles with few or no interior nodes (1×1, 2×1, 2×2, 3×2, 3×3 etc.). We can also avoid the parent id overhead altogether and only store the set of identified rectangles. The only downside is that, during insertion (Step 3 above), we now need to search for the parent rectangle of the start and goal — instead of being able to identify it in constant time.
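The trade-off described above can be sketched concretely. In the sketch below (an illustration under my own assumptions, not the authors’ implementation) we store only the rectangle list — origin and dimensions per rectangle — and pay a linear scan at insertion time instead of a constant-time per-node parent id lookup:

```python
from dataclasses import dataclass

@dataclass
class Rect:
    x: int   # origin column
    y: int   # origin row
    w: int   # width
    h: int   # height

    def contains(self, px, py):
        return (self.x <= px < self.x + self.w
                and self.y <= py < self.y + self.h)

def parent_rect(rects, px, py):
    """O(|rects|) scan replacing the constant-time per-node parent id.
    Returns the index of the enclosing rectangle, or None if the node
    lies outside every stored rectangle (e.g. in a pruned 1x1 region)."""
    for i, r in enumerate(rects):
        if r.contains(px, py):
            return i
    return None
```

With n traversable nodes and r rectangles, this drops the storage from roughly 5n integers to 4r, at the cost of an O(r) search when inserting the start and goal.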
Performance:
We evaluated RSR on a number of benchmark grid map sets taken from Nathan Sturtevant’s freely available pathfinding library, Hierarchical Open Graph. One of the map sets is from the game Baldur’s Gate II: Shadows of Amn. The other two map sets (Rooms and Adaptive Depth) are both synthetic, though the latter could be described as semi-realistic. I will give only a short summary of our findings; full results and discussion is available in the following papers: [1] [2].
In short: we observed that in most cases RSR can consistently speed up A* search by a factor of 2 to 3 (Baldur’s Gate), 2 to 5 (Adaptive Depth) and 5 to 9 (Rooms). In some cases the gain can be much higher: up to 30 times.
We found that the extent of the speed gains depends on the topography of the underlying grid map. On maps that feature large open areas, or which can be naturally decomposed into rectangular regions, RSR is highly effective. When these conditions do not exist, the observed speed gains are more modest. This performance is competitive with, and often better than, Swamps [4], another state-of-the-art search space reduction technique. We did not identify a clear winner (each algorithm has different strengths) but did notice that the two methods are orthogonal and could be easily combined.
[1] D. Harabor, A. Botea, and P. Kilby. Path Symmetries in Uniform-cost Grid Maps. In Symposium on Abstraction Reformulation and Approximation (SARA), 2011.
[2] D. Harabor and A. Botea. Breaking Path Symmetries in 4-connected Grid Maps. In AAAI Conference on Artificial Intelligence and Interactive Digital Entertainment (AIIDE), 2010.
[3] D. Harabor and A. Grastien. Online Graph Pruning for Pathfinding on Grid Maps. In National Conference on Artificial Intelligence (AAAI), 2011.
[4] N. Pochter, A. Zohar, J. S. Rosenschein and A. Felner. Search Space Reduction Using Swamp Hierarchies. In National Conference on Artificial Intelligence (AAAI), 2010.
In parts two and three I will give a technical overview of both RSR and JPS, discussing their strengths, weaknesses and observed performance on two popular video-game benchmarks kindly made available by Nathan Sturtevant and BioWare: Baldur’s Gate II and Dragon Age: Origins.
Please note: this work was developed in conjunction with my co-authors: Adi Botea, Alban Grastien and Philip Kilby. It forms part of my doctoral research into pathfinding at NICTA and The Australian National University. Electronic copies of the papers mentioned in this article are available from my homepage.
Before proceeding further it is necessary to establish some precise terminology. For example: what exactly is a path? Traditionally, Computer Scientists define paths as ordered sequences of connected edges. The conjunction of these edges represents a walk in the search graph from some arbitrary start location to some arbitrary goal location. However this definition is too general to capture the idea of symmetry in grid maps. For that, we need a slightly different notion of a path:
The distinction between edges and vectors is an important one as it allows us to distinguish between paths that are merely equivalent (i.e. of the same length) and those which are symmetric. But what, exactly, does it mean to have a symmetric relationship between paths?
As a clarifying example, consider the problem instance in Figure 1. Each highlighted path is symmetric to the others since they all have the same length and they all involve some permutation of 9 steps up and 9 steps right.
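The symmetry test implied by the example can be written down directly: two grid paths are symmetric when they share the same endpoints and one is a permutation of the other’s steps, i.e. they traverse the same multiset of unit vectors. A minimal sketch (representing a path as a list of (x, y) coordinates; the helper names are my own):

```python
from collections import Counter

def steps(path):
    """Decompose a path into its sequence of step vectors."""
    return [(x2 - x1, y2 - y1)
            for (x1, y1), (x2, y2) in zip(path, path[1:])]

def symmetric(p, q):
    """True if p and q share endpoints and q is a permutation of p's steps."""
    return (p[0] == q[0] and p[-1] == q[-1]
            and Counter(steps(p)) == Counter(steps(q)))
```

By this test the two staircase paths from (0, 0) to (1, 1) via (1, 0) and via (0, 1) are symmetric, while the single diagonal step between the same endpoints is not: it is merely equivalent in length, not a permutation of the same steps.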
In the presence of symmetry a search algorithm such as A* is forced to explore virtually every location along each optimal path. In Figure 1, depending on how we break ties, A* might expand every node on the entire map before reaching the goal. Further, A* will usually consider many other nodes that appear promising but that are not on any optimal path. For example, each time A* expands a node from the fringe of the search, it has already likely found almost every symmetric shortest path leading to that node (again, depending on how we break ties when two nodes appear equally good). But this effort is in vain if these expansions do not lead the search closer to the goal and instead into a dead-end. Thus arises the question which I will attempt to answer in this article: how do we deal with symmetry when pathfinding on grid maps?
A large number of techniques have been proposed to speed up pathfinding. Most can be classified as variations on three themes:
Algorithms of this type are fast and use little memory but compute paths which are usually not optimal and must be refined via further search. Typical examples: HPA* [2], MM-Abstraction [9].
Algorithms of this type usually pre-compute and store distances between key pairs of locations in the environment. Though fast and optimal, such methods can incur significant memory overheads, which is often undesirable. Typical examples: Landmarks [6], True Distance Heuristics [10].
Algorithms of this type usually aim to identify areas on the map that do not need to be explored in order to reach the goal optimally. Though not as fast as abstraction or memory heuristics, such methods usually have low memory requirements and can improve pathfinding performance by several factors. Typical examples: Dead-end Heuristic [1], Swamps [8], Portal Heuristic [7].
My work in this area can be broadly classified as a search space pruning technique. Where it differs from existing efforts is that, instead of trying to identify areas that do not have to be crossed during search, I aim to identify and prune symmetric nodes that prevent the fast exploration of an area. This idea nicely complements existing search-space reduction techniques and, as it turns out, also complements most grid-based abstraction methods and memory heuristic approaches.
My co-authors and I have developed two distinct approaches for explicitly identifying and eliminating path symmetries on grid maps. I outline them here very briefly and in more detail in an upcoming post:
RSR [4][5] can be described as a pre-processing algorithm which identifies symmetries by decomposing the grid map into empty rectangular regions. The central idea, then, is to avoid symmetries by only ever expanding nodes from the perimeter of each rectangle and never from the interior.
JPS [3] consists of two simple neighbour pruning rules that are applied recursively during search. Each rule considers the direction of travel to a given node from its parent (either a straight step or a diagonal step) in order to prune the set of local neighbours (tiles) around that node. The central idea is to avoid any neighbours that could be reached optimally from the parent of any given node. In this way we are able to identify and avoid, on-the-fly, large sets of symmetric path segments; such as those in Figure 1.
RSR can pre-process most maps in under a second and has a very small memory overhead: we need to keep only the id of the parent rectangle each node is associated with (and the origin and dimensions of each such rectangle). By comparison, JPS has no pre-processing requirements and no memory overheads. Both algorithms are optimal and both can speed up A* by an order of magnitude and more. Figure 2 shows a typical example. For reference, I also include a screenshot for vanilla A*. You can click on each image to make it larger.
Next time I will explain, in more depth, the mechanical details of both RSR and JPS. In the meantime, electronic copies of the research papers in which RSR [4][5] and JPS [3] were originally described are available from my homepage.
If you have any questions or comments on this work, I’d be glad to hear from you. Please leave a message below or, if you prefer, drop me an email instead (my contact details are on the About page).
Continue to Part 2.
Continue to Part 3.
[1] Y. Björnsson and K. Halldórsson. Improved Heuristics for Optimal Path-finding on Game Maps. In AAAI Conference on Artificial Intelligence and Interactive Digital Entertainment (AIIDE), 2006.
[2] A. Botea, M. Müller, and J. Schaeffer. Near Optimal Hierarchical Path-finding. In Journal of Game Development (Issue 1, Volume 1), 2004.
[3] D. Harabor and A. Grastien. Online Graph Pruning for Pathfinding on Grid Maps. In National Conference on Artificial Intelligence (AAAI), 2011.
[4] D. Harabor, A. Botea, and P. Kilby. Path Symmetries in Uniform-cost Grid Maps. In Symposium on Abstraction Reformulation and Approximation (SARA), 2011.
[5] D. Harabor and A. Botea. Breaking Path Symmetries in 4-connected Grid Maps. In AAAI Conference on Artificial Intelligence and Interactive Digital Entertainment (AIIDE), 2010.
[6] A. V. Goldberg and C. Harrelson. Computing The Shortest Path: A* Search Meets Graph Theory. In SIAM Symposium on Discrete Algorithms (SODA), 2005.
[7] M. Goldenberg, A. Felner, N. Sturtevant and J. Schaeffer. Portal-Based True-Distance Heuristics for Path Finding. In Symposium on Combinatorial Search (SoCS), 2010.
[8] N. Pochter, A. Zohar, J. S. Rosenschein and A. Felner. Search Space Reduction Using Swamp Hierarchies. In National Conference on Artificial Intelligence (AAAI), 2010.
[9] N. R. Sturtevant. Memory-Efficient Abstractions for Pathfinding. In AAAI Conference on Artificial Intelligence and Interactive Digital Entertainment (AIIDE), 2007.
[10] N. R. Sturtevant, A. Felner, M. Barrer, J. Schaeffer and N. Burch. Memory-Based Heuristics for Explicit State Spaces. In International Joint Conference on Artificial Intelligence (IJCAI), 2009.
Steadily forward march the hands of the clock.
The paper needs writing, results must be got;
Forget about sleep or all is for naught!
With each passing second the conference draws nearer,
To obtain tenure you must publish here.
Tick, Tock. Tick, Tock.
Redouble your efforts ye graduate student,
And condense into pages no greater than six,
The sum of your efforts these many long weeks.
Spell out in sections, as short as can be,
The problem you solve and your neat novelty.
Whether by theory or experimental design,
Argue the case that your approach is sublime!
Cite all prior work and conclude on a high,
Then compile your sources one final time.
Press upload and hope for the best,
In six weeks you find out if your work’s passed the test!
Tick, Tock. Tick, Tock.
With the deadline past and the system now closed,
The program committee take up their roles.
On secretive forums, in groups of three,
They meet to discuss what will and won’t be.
The schedule is short and there’s lots of bidding,
Each submission receives only a few hours’ reading.
Each reviewer’s position: your paper’s rejected;
Your job was to prove that it should be accepted!
The standards are high and acceptance rates low,
The hopes of some authors will suffer a blow.
Yet should worst come to worst, you need never fear;
If you don’t make it now, there’s always next year!
One of the (few) perks of being a research student is attending conferences. There are many reasons to attend a conference: you get to travel to places you probably wouldn’t otherwise visit, toss around ideas with other people who work in the same research area and get a glimpse into the state of the art approaches for problems you might not ever have considered.
Last week I was fortunate enough to attend the 2010 Conference on Artificial Intelligence and Interactive Digital Entertainment (otherwise known as AIIDE). A relative newcomer to the scene (currently in its 6th iteration), AIIDE has developed a bit of a reputation as being the venue at which the games industry and AI academia meet. From a student’s point of view AIIDE is attractive for two reasons:
This year’s event, held at Stanford University from the 11th to the 13th of October, promised to be no different. On the cards were several interesting keynote speakers, a StarCraft AI competition and an entire day of pathfinding goodness (including my own humble contribution to speeding up search). Was I excited? You bet!
Day 1
The conference kicked off in style with a wonderful keynote from industry veteran Chris Jurney (slides). Known for his work on games such as Company of Heroes and Dawn of War 2, Chris talked about some of the challenges he’s faced implementing efficient AI systems in real-time strategy games. Perhaps the most surprising revelation (for me) was the critical role that pathfinding plays in Chris’ work. For example:
Chris also touched on some of the difficulties he encountered when modeling different size units (a problem close to my heart) and finished by outlining an interesting rule-based melee system which was employed in Dawn of War 2. It was a complex dance of 8-10 weight-based heuristic features (such as retaliation, decision inertia and others) which all had to be balanced in order to achieve nicely choreographed battles of up to 100 units at a time. Impressive stuff!
Paper Highlights
Perceptually Realistic Behavior through Alibi Generation
Ben Sunshine-Hill, Norman Badler
One of the first talks of the day and also among the most interesting. This paper deals with the problem of generating intent (or alibis) for unscripted NPCs the player might be observing in large open-world games. The authors have developed a novel system capable of efficiently generating increasingly detailed plans for NPCs so it looks like they know what they’re doing rather than stupidly wandering around with no purpose.
Learning Companion Behaviors Using Reinforcement Learning in Games
AmirAli Sharifi, Duane Szafron, Richard Zhao
Duane Szafron presented a reinforcement learning approach for developing the behaviour of companion NPCs in story-driven games such as Neverwinter Nights. The main motivation of the work appears to be that party members should react to the player in a manner consistent with how the player has treated them thus far. Pretty neat!
Using Machine Translation to Convert Between Difficulties in Rhythm Games
Kevin Gold, Alexandra Olivier
A very engaging talk from Kevin Gold, this paper outlines an automatic system for analysing note sequences in 1-dimensional rhythm games like Guitar Hero. The aim is to take in the sequence of notes for an expert difficulty song and automatically generate sequences for the easy, medium and hard difficulties. It’s an interesting problem because, according to Kevin, game designers at Harmonix have to manually encode the different sequences of notes which appear for every song in each of the 4 available difficulties. The authors analysed the song corpus of Guitar Hero 1 and 2 (the notes are apparently stored as midi files on the disc) and developed a novel Bayesian method for predicting which notes to drop when converting between difficulties. Perhaps the most remarkable aspect of the work is the accuracy of the reported method: in many cases the generated songs match the official versions over 80% of the time. Nice work guys!
Day 2
A couple of interesting keynotes on the second day:
Another highlight of the day was a Man vs Machine match between one of the best-performing entrants in the StarCraft AI competition and former pro-gamer =DoGo= (a Brood War veteran from the 2001 WCG). The take-home message was clear: AI sucks. The computer opponent was easily crushed in a very one-sided match. Interestingly however, the same AI won more often than not when pitted against the default Blizzard bots; so it’s probably more than a match for most casual players.
Paper Highlights
An Offline Planning Approach to Game Plotline Adaptation
Boyang Li, Mark Riedl
Boyang Li discussed an interesting partial-order planning approach for tailoring role-playing game plots to particular player preferences. The idea is to feed an AI planner a set of player preferences (preferred quest types etc), a corpus of quests (made up of tasks, actions, rewards etc) and have it generate a sound and coherent story. It’s a nice idea, particularly as the described system appears to be able to repair existing plots to deal with changing preferences.
Adversarial Navigation Mesh Alteration
D. Hunter Hale, G. Michael Youngblood
This work studies competing groups of units operating in a Capture The Flag environment. Each group of units is able to effect various changes on the world; for example, by constructing obstacles to stop opposing units from reaching their flag or tearing down obstacles to reach an enemy flag (or both). An interesting aspect of the research is that the pathfinding routines of each group of agents are modified to take into account the cost of making changes to the world geometry — so, for example, rather than just computing the cost of going around an obstacle units are also able to compute the cost of going through it. It’s a neat idea because it allows more complex agents to be created without actually adding complex agent design overheads.
Terrain Analysis in Real-time Strategy Games: Choke Point Detection and Region Decomposition
Luke Perkins
Another pathfinding-related paper, this work focuses on decomposing game maps (competition maps from StarCraft in this case) in order to identify choke points. The ultimate objective is to partition the environment into a series of different areas which can be used by higher-level AI routines to (for example) facilitate efficient navigation and coordinate strategic planning. The decompositions I saw looked very promising indeed; the algorithm appears to be very effective at partitioning the map into strategic areas (usually convex in shape and of near-maximal size) which are connected to each other only by transitions at choke-point locations. Nice job Luke!
Day 3
The third day opened with a keynote from Microsoft Researcher Sumit Basu. According to the program, Sumit’s research focuses on making music generation more accessible to non-musicians. I wish I could give a detailed account of the talk but sadly I missed it — I was too busy double and triple checking my slides for later in the day!
I did manage to arrive at the conference in time for the announcements of the StarCraft AI competition winners however.
28 bots were submitted and four tournaments were held:
Aside from eternal bragging rights, each winner received a limited edition copy of StarCraft 2 which was signed by the development team at Blizzard — sweet! Well done guys!
Paper Highlights
A Comparison of High-Level Approaches for Speeding Pathfinding
Nathan Sturtevant, Robert Giesberger
This work looked at the efficacy of a number of different approaches for speeding up pathfinding. In his talk Nathan compared the hierarchical pathfinding algorithm used in the recent roleplaying game Dragon Age: Origins with two less known but very fast approaches: Differential Heuristics and Contraction Hierarchies. The former is an efficient memory-based heuristic that reduces pathfinding to a constant number of table lookup operations while the latter is a new technique which speeds up search by adding so called shortcut edges into a graph. The most interesting result, in my opinion, was the reported performance figures for the Contraction Hierarchies method. It appears that, when tuned correctly, this algorithm is several times faster than standard A* search and in some cases even competitive with the suboptimal hierarchical pathfinder. Nice!
DHPA* and SHPA*: Efficient Hierarchical Pathfinding in Dynamic and Static Game Worlds
Alex Kring, Alex J. Champandard, Nick Samarin
This paper, presented by Alex Kring, introduced two variations on the well known HPA* algorithm. Both SHPA* and DHPA* aim to eliminate the insertion overhead associated with HPA* but the latter also makes use of lookup tables to improve the performance of search when traversing individual clusters. Alex went on to discuss the applicability of DHPA* to maps in which the terrain features dynamic obstacles and touched on one of the less studied aspects of pathfinding that game developers often face: parallel processing.
Breaking Path Symmetries in 4-connected Grid Maps (pdf)
Daniel Harabor, Adi Botea

My own submission to this year’s conference is a novel technique for speeding up pathfinding search. Rather than invent a new algorithm to go fast, we instead develop a technique for reducing the size of the search space by eliminating symmetry. Our approach is simple: we decompose an arbitrary grid map into a series of empty rectangular rooms and discard all tiles from the interior of each room, leaving only a boundary perimeter. Finally, we connect each pair of tiles on directly opposite sides of the perimeter. It turns out that this simple strategy can speed up A* (and indeed, any search algorithm) by 3-4 times without any loss of solution quality. I think the most exciting aspect of this work, however, is that it is completely orthogonal to all existing pathfinding optimisations. Given how simple and effective the technique can be, I hope it will be noticed by performance-hungry game developers in the near future.
There were a number of interesting presentations at the conference, including a few neat pathfinding-related talks (which I discuss in an earlier summary) but the really cool thing is the other stuff the organisers have made available:
All 3 Keynote talks:
Slides from the various tutorials:
And of course copies of all 53 accepted papers.
I’m very impressed that the organisers were kind enough to make all these resources available. In my experience, it’s above and beyond what one could reasonably expect from similar events.
Together with the recent decision to open the AAAI digital library to the public, this gives me hope that we’re seeing a move away from subscription-based models for accessing academic literature. Now if only someone could convince those pesky journal archives to do the same. Elsevier, I’m looking at you!
But what happens when you can’t (or don’t want to) work with grids? More specifically, how do you deal with agent diversity in continuous environments? As it turns out this problem can be solved with a well known method from the field of robotics: the Probabilistic Roadmap (or PRM for short). PRMs are highly suited to capturing topographical information in complex multi-dimensional environments and have been effectively used for robot pathfinding since their inception in the mid 90s.
In this article I want to describe a recent PRM method which partly inspired my work on HAA*; it’s based on a paper by Nieuwenhuisen, Kamphuis, Mooijekind and Overmars that was presented at CGAIDE’04. This is a really neat piece of research which makes several contributions:
Since the scope is quite broad and my interest rather more narrow, I will only discuss the first aspect. If you’d like to know more, I recommend taking a look at the paper itself; it’s very well written and highly readable.
Traditional roadmap construction methods usually sample an environment by selecting a point and checking if it is collision-free for (some configuration of) a robot (Figure 2). The resultant PRM is thus only applicable to the robot for which it was created (or one smaller in size). If we try to re-use the PRM for a larger robot there is no guarantee that any computed path will be valid (Figure 3).
To solve this problem the authors propose a PRM construction method that uses the Voronoi diagram of the map as a guide. A Voronoi diagram is simply a division of a geometric space into regions in which every location is either closest to one particular obstacle or otherwise equidistant to several different obstacles (Figure 4).
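To make the idea concrete, here is a minimal sketch (my own illustration, not from the paper) that labels each free cell of a small grid map with its nearest obstacle. Cells that are equidistant from two or more obstacles lie on the boundaries between Voronoi regions.

```python
def grid_voronoi(grid):
    # grid: list of strings; '#' marks an obstacle, '.' marks free space.
    # Assumes the map contains at least one obstacle.
    obstacles = [(r, c) for r, row in enumerate(grid)
                 for c, v in enumerate(row) if v == '#']
    labels = {}
    for r, row in enumerate(grid):
        for c, v in enumerate(row):
            if v == '#':
                continue
            # nearest obstacle by squared Euclidean distance;
            # ties (the Voronoi boundary) are broken arbitrarily here
            labels[(r, c)] = min(obstacles,
                                 key=lambda o: (o[0] - r) ** 2 + (o[1] - c) ** 2)
    return labels
```

A real implementation would compute the diagram far more efficiently (e.g. via a wavefront expansion), but the brute-force version shows exactly what the regions mean.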
The PRM construction method is very straightforward:
Figure 5 illustrates a typical result. Notice that the roadmap always retains maximum clearance from all obstacles at all times. Each node on the PRM (or backbone path) is annotated with a clearance value that represents the radius of a maximal bounding sphere centred at that position. Because the nodes on the roadmap are close together they form an overlapping clearance corridor which can be used to answer pathfinding queries for (circular shaped) agents of arbitrary size. Pretty cool, huh?
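The clearance-corridor test itself is trivial; a sketch (my own, with hypothetical names, not the paper's API): an agent can follow a backbone path exactly when its radius fits within every annotated clearance along the way.

```python
def path_admissible(path, radius):
    # path: list of (position, clearance) pairs along a backbone path;
    # a circular agent of the given radius fits iff every node's
    # annotated clearance is at least the agent's radius
    return all(clearance >= radius for _, clearance in path)
```

So a corridor annotated with clearances 3.0 and 2.5, for example, admits any circular agent of radius up to 2.5.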
Although I quite like PRMs there are some limitations to be aware of. Perhaps the biggest is that all PRMs suffer a common ailment: representational incompleteness. Simply put, if a path exists in the original map, there is no guarantee that it will be found using a PRM. Actually, to be more precise, PRMs are only probabilistically complete, which is a fancy way of saying that as the sampling coverage of the terrain approaches 100% so too does the probability that the PRM can be used to find any valid path present in the original environment. Despite this property, PRMs have been shown to work quite well in practice, solving a large number of problems in very complex environments. The key to building a good PRM is the sampling strategy; a sufficient number of points have to be selected to reasonably approximate the original environment. How much is “sufficient” will depend on the situation; there are no hard-and-fast rules.
Another disadvantage of PRMs is that they only represent the terrain traversal requirements for one capability (i.e. all areas are either traversable by all agents or obstacles for all agents). This is in stark contrast to other pathfinding approaches, like HAA*, which build abstractions that reflect traversal requirements for all possible capabilities. One way to solve this problem is to simply build several PRMs, one for each distinct terrain traversal capability.
So that about wraps it up for today. As always, please feel free to contact me with any questions, comments or corrections you might have regarding this article.
Dennis Nieuwenhuisen, Arno Kamphuis, Marlies Mooijekind and Mark H. Overmars,
Automatic Construction of Roadmaps for Path Planning in Games,
International Conference on Computer Games: Artificial Intelligence, Design and Education (2004),
pages 285-292
(pdf)

Daniel Harabor and Adi Botea,
Hierarchical path planning for multi-size agents in heterogeneous environments,
IEEE Symposium on Computational Intelligence and Games (2008)
(pdf) (presentation)
Annotated A* operates on a grid map which is annotated with clearance values. A clearance value is simply a distance-to-obstacle metric that provides us with a useful indicator of how much traversable space there is at a given location. We can use clearances to quickly work out which parts of a map are traversable for an arbitrary agent by simply comparing the agent’s size with the clearance value associated with a particular location. Figures 1-3 show the associated clearance value annotations for 3 capabilities on a small map with 2 traversable terrains: Ground (white tiles) and Water (blue tiles). A capability is defined as a set of terrains used to specify which parts of the environment are traversable for an agent.
In this second piece I want to discuss how to take this simple pathfinding system and speed it up significantly through the use of hierarchical abstraction. The main problem here is that a multi-terrain map annotated with clearance values contains lots of topographical information but abstraction methods can only hope to approximate the original. Thus, the goal is to build an abstraction which is representationally complete i.e. if a path between two points can be found on the original annotated map, we should be able to also find it using only the abstract map. To achieve this we’re going to attack the problem in 3 steps:
In order to create an abstract representation of an annotated map we follow Botea et al. and divide the environment into a series of discrete square areas called clusters. Figure 4 shows the result of this step on our running example using clusters of size 5×5.
Next, we identify entrances between each pair of adjacent clusters. An entrance is defined as an obstacle-free area of maximal size that exists along the border between two clusters (Figure 5a). Each entrance has associated with it a transition point which reflects the fact that an agent can traverse from one cluster to the other. To this end we select from each entrance a pair of adjacent tiles, one from each cluster, which maximise clearance (Figure 5b). Each transition point is represented in the abstract graph by two nodes connected with a single inter-edge of weight 1.0. The inter-edge is also annotated with the corresponding capability and clearance value that reflect the traversability requirements of the transition point.
Finally, we complete the abstraction by finding all possible ways to traverse each cluster. We achieve this by running Annotated A* (AA*) between each pair of abstract nodes inside a cluster; one search for each combination of possible agent sizes and capabilities. Every time AA* successfully finds a path we add a new intra-edge between the start and goal nodes. The weight of the edge is equal to the length of the path and we further annotate it with the size and capability parameters used by AA* to reflect the traversability requirements of the path. This approach is highly effective at capturing all pertinent topographical details of the original map. In fact, it is easy to see that the resultant graph (which is termed an initial abstraction) is representationally complete since we’ve identified all possible transitions between each pair of adjacent clusters and all possible ways to traverse each cluster (Figure 6).
An interesting observation made during the abstraction-building process was that on many maps the same path was often returned by AA* when searching between the same pair of points with different size and capability parameters. This indicates that our abstract graph actually contains unnecessary or duplicated information (and is also needlessly large as a result). To solve this problem we’re going to apply two edge dominance approaches to prune unnecessary nodes and edges. I want to avoid describing these ideas formally (the details are in the paper) and instead focus on the intuition behind each one.
The first, termed strong dominance, states that if two edges, both of identical length, connect the same pair of nodes and both are traversable by the same capability then we need only retain the one with the largest clearance. Figure 1 illustrates this idea. It is easy to see that using this approach preserves representational completeness; any agent able to traverse the edge with smaller clearance is also able to traverse the one with larger clearance. The resultant graph is termed a high-quality abstraction because we discard only duplicate information from the graph.
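In code, the strong dominance rule might be sketched like this (a hypothetical edge representation of my own, not the one from my actual implementation): among edges sharing the same endpoints, length and capability, keep only the one with the greatest clearance.

```python
def prune_strongly_dominated(edges):
    # edges: tuples of (u, v, length, capability, clearance);
    # per (u, v, length, capability) group, retain the largest-clearance edge
    best = {}
    for edge in edges:
        u, v, length, capability, clearance = edge
        key = (u, v, length, frozenset(capability))
        if key not in best or clearance > best[key][4]:
            best[key] = edge
    return list(best.values())
```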
The second approach, termed weak dominance, is focused on minimising the number of transitions between clusters. To achieve this the algorithm analyses pairs of inter-edges that connect the same two clusters and attempts to prove that if one is removed the representational completeness of the graph is maintained. The effect is that only transitions which are traversable by the largest number of agents are retained. This is similar to the way motorists frequently prefer to travel between locations via freeways, which are traversable by many kinds of vehicles, instead of opting for more direct offroad travel. Figure 8 illustrates this idea; again, it is easy to see that the application of weak dominance also preserves representational completeness. The resultant graph is termed a low-quality abstraction, to reflect the fact that some connectivity information is lost in the process.
Choosing which dominance technique to use will depend on the exact requirements of the target application. Empirical results have shown that in both cases the amount of memory required to store the abstract graph can be a small fraction of that required by the original map (though the exact number depends on a range of factors which are discussed in the paper). Comparatively speaking, low-quality abstractions can be more than 50% smaller than their high-quality counterparts, but there is a small tradeoff. In particular, high-quality abstractions produce, on average, paths 3-5% longer than optimal while low-quality abstractions are in the 6-10% range.
The process of finding a path using an abstract graph is a straightforward one that will be familiar to anyone who has come across the HPA* algorithm:
As with Annotated A*, the search algorithm from Step 2 is modified slightly to accept two extra parameters: the agent’s size and capability. In addition, the search function is also augmented so that before a node is added to A*’s open list we first verify that it is in fact reachable. In this case, a node is reachable only if the edge connecting it to the current node is traversable for the given size and capability parameters. The resultant algorithm is termed Hierarchical Annotated A* (HAA* for short).
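The reachability test itself is simple. Here is a sketch using a hypothetical Edge record carrying the capability and clearance annotations described above (the names are my own, not from the HOG codebase):

```python
from collections import namedtuple

# a hypothetical abstract-graph edge: endpoints, weight, plus the
# capability (a set of terrains) and clearance annotations
Edge = namedtuple("Edge", "u v weight capability clearance")

def reachable(edge, size, capability):
    # traversable iff the agent's capability covers every terrain the
    # edge requires AND the agent fits within the edge's clearance
    return edge.capability <= capability and edge.clearance >= size
```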
As with other hierarchical planners, HAA* is shown to be very fast. I analysed its performance on a large set of problems (over 140K) using both small and large maps from Bioware’s popular RPG Baldur’s Gate. Since there are no similar pathfinding algorithms to measure against, it was contrasted with Annotated A*. In short, HAA* is an order of magnitude faster and, even with my naive, non-optimised implementation, it was able to find solutions to most problems in ~6ms on average (tested on my Core2 Duo 2.16GHz MacBook, running OSX 10.5.2 with 2GB of memory).
So that about wraps things up. If you have any questions or comments on anything described in this or the previous article, please feel free to contact me. I also suggest checking out the original HAA* paper and associated presentation slides which contain a more in-depth discussion of these ideas. If you’d like to play with a working implementation, you can download the source code I wrote to evaluate the algorithm. It’s written in C++ and based on the University of Alberta’s freely available pathfinding library, Hierarchical Open Graph (or HOG). HOG compiles on most platforms; I’ve personally tested it on both OSX and Linux and I’m told it works on Windows too. The classes most relevant to this article are probably AnnotatedCluster.cpp and AnnotatedClusterAbstraction.cpp which together are responsible for generating the abstract graph. Meanwhile, AnnotatedHierarchicalAStar.cpp provides a reference implementation of the HAA* algorithm.
This was the topic of a recent paper which I presented at CIG’08. In the talk I outlined Hierarchical Annotated A* (HAA*), a path planner which is able to efficiently address this problem by first analysing the terrain of a game map and then building a much smaller approximate representation that captures the essential topographical features of the original. In this article (the first in a series of two) I’d like to talk about the terrain analysis aspects of HAA* in some detail and outline how one can automatically extract topological information from a map in order to solve the variable-sized multi-terrain pathfinding problem. Next time, I’ll explain how HAA* is able to use this information to build space-efficient abstractions which allow an agent to very quickly find a high quality path through a static environment.
Simply put, a clearance value is a distance-to-obstacle metric which measures the amount of traversable space at a discrete point in the environment. Clearance values are useful because they allow us to quickly determine which areas of a map are traversable by an agent of some arbitrary size. The idea of measuring distances to obstacles in pathfinding applications is not a new one. The Brushfire algorithm, for instance, is particularly well known to robotics researchers (though for different reasons than those motivating this article). This simple method, which is applicable to grid worlds, proceeds as follows:
First, to each traversable tile immediately adjacent to a static obstacle in the environment (for example, a large rock or along the perimeter of a building) assign a value of 1. Next, each traversable neighbour of an assigned tile is itself assigned a value of 2 (unless a smaller value has already been assigned). The process continues in this fashion until all tiles have been considered; Figure 1 highlights this idea; here white tiles are traversable and black tiles represent obstacles (NB: The original algorithm actually assigns tiles adjacent to obstacles a value of 2; I use 1 here because it makes more sense for our purposes).
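The steps above amount to a multi-source breadth-first search seeded at the obstacle-adjacent tiles. A minimal sketch (assuming 4-connected tiles and ignoring the map border; the original algorithm may use a different neighbourhood):

```python
from collections import deque

def brushfire(grid):
    # grid: list of strings; '.' is traversable, '#' is an obstacle.
    # Returns a dict mapping each traversable tile to its clearance,
    # where tiles adjacent to an obstacle receive clearance 1.
    rows, cols = len(grid), len(grid[0])
    steps = ((1, 0), (-1, 0), (0, 1), (0, -1))
    dist = {}
    queue = deque()
    for r in range(rows):
        for c in range(cols):
            if grid[r][c] != '.':
                continue
            # seed the wavefront: traversable tiles touching an obstacle
            if any(0 <= r + dr < rows and 0 <= c + dc < cols
                   and grid[r + dr][c + dc] == '#'
                   for dr, dc in steps):
                dist[(r, c)] = 1
                queue.append((r, c))
    while queue:  # expand the wavefront outward, one ring at a time
        r, c = queue.popleft()
        for dr, dc in steps:
            nr, nc = r + dr, c + dc
            if (0 <= nr < rows and 0 <= nc < cols
                    and grid[nr][nc] == '.' and (nr, nc) not in dist):
                dist[(nr, nc)] = dist[(r, c)] + 1
                queue.append((nr, nc))
    return dist
```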
Brushfire makes use of a minimum obstacle distance metric to compute clearances, which works reasonably well in many situations. If we assume our agents (I use the terms agent and unit interchangeably) are surrounded by a square bounding box we can immediately use the computed values to separate traversable locations from non-traversable ones by comparing the size of an agent with the corresponding clearance of a particular tile. Figure 2 shows the search space for an agent of size 1×1; in this case, any tile with clearance of at least 1 is traversable. Similarly, in Figure 3 the search space is shown for an agent of size 2×2. Notice that the bigger agent can occupy far fewer locations on the map; only tiles with clearance values of at least 2. Because this approach is so simple it has seen widespread use, even in video games. At GDC 2007 Chris Jurney (from Relic Entertainment) described a pathfinding system for dealing with variable-sized agents in Company of Heroes — which happens to make use of a variant of the Brushfire algorithm.
Unfortunately, clearances computed using minimum obstacle distance do not accurately represent the amount of traversable space at each tile. Consider the example in Figure 4; Here our 2×2 agent incorrectly fails to find a path because all tiles in the bottleneck region are assigned clearance values less than 2. To overcome this limitation we will focus on an alternative obstacle-distance metric: true clearance.
The process of measuring true clearance for a given map tile is very straightforward: surround each tile with a clearance square (a bounding box, really) of size 1×1. If the tile is traversable, assign it an initial clearance of 1. Next, expand the square symmetrically down and to the right, incrementing the clearance value each time, until no further expansions are possible. An expansion is successful only if all tiles within the square are traversable. If the clearance square intersects with an obstacle or with the edge of the map the expansion fails and the algorithm selects another tile to process. The algorithm terminates when all tiles on the map have been considered. Figure 5 highlights the process and Figure 6 shows the result on our running example.
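A sketch of the expansion procedure for a single tile (assuming '.' marks a traversable tile; a full implementation would simply run this over every tile on the map):

```python
def true_clearance(grid, r, c):
    # grid: list of strings; '.' is traversable, anything else is an obstacle.
    # Grow a square from (r, c) down and to the right while every tile
    # inside it is traversable; clearance = side length of the largest square.
    rows, cols = len(grid), len(grid[0])
    if grid[r][c] != '.':
        return 0
    size = 1
    while True:
        nr, nc = r + size, c + size  # row/column added by the next expansion
        if nr >= rows or nc >= cols:
            return size  # the square would run off the map
        # check the new column and new row of the expanded square
        if any(grid[r + i][nc] != '.' for i in range(size + 1)) or \
           any(grid[nr][c + j] != '.' for j in range(size + 1)):
            return size  # the square would intersect an obstacle
        size += 1
```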
Notice that by using true clearance the example from Figure 4 now succeeds in finding a solution. In fact, one can prove that using the true clearance metric it is always possible to find a solution for any agent size if a valid path exists to begin with (i.e. the method is complete; see my paper for the details).
Until now the discussion has been limited to a single type of traversable terrain (white tiles). As it turns out however, it is relatively easy to apply any clearance metric to maps involving arbitrarily many terrain types. Given a map with n terrain types we begin by first identifying the set of possible terrain traversal capabilities an agent may possess. A capability is simply a disjunction of terrains used to specify where each agent can traverse. So, on a map with 2 terrains such as {Ground, Water} the corresponding list of all possible capabilities is given by a set of sets; in this case {{Ground}, {Water}, {Ground, Water}}. Note that, for simplicity, I assume the traveling speed across all terrains is constant (but this constraint is easily lifted).
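Enumerating the capability set amounts to enumerating the non-empty subsets of the terrain set; a quick sketch:

```python
from itertools import combinations

def all_capabilities(terrains):
    # every non-empty subset of the terrain set is a possible capability
    caps = []
    for k in range(1, len(terrains) + 1):
        caps.extend(frozenset(c) for c in combinations(terrains, k))
    return caps
```

For n terrain types this yields 2^n - 1 capabilities in total, matching the {Ground, Water} example above.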
Next, we calculate and assign a clearance value to each tile for every capability. Figures 7-9 show the corresponding clearance values for each capability on our toy map; notice that we’ve converted some of the black (obstacle) tiles to blue to represent the Water terrain type (which some agents can traverse).
Theoretically this means that, at most, each tile will store 2^(n-1) clearance value annotations (again, see the paper for details). I suspect this overhead can probably be improved with clever use of compression optimisations though I did not attempt more than a naive implementation. Alternatively, if memory is very limited (as is the case with many robot-related applications) one can simply compute true clearance on demand for each tile, thus trading off a little processing speed for more space.
In order to actually find a path for an agent with arbitrary size and capability we can use the venerable A* algorithm, albeit with some minor modifications. First, we must pass two extra parameters to A*’s search function: the agent’s size and capability. Next, we augment the search function slightly so that before a tile is added to A*’s open list we first verify that it is in fact traversable for the given size and capability; everything else remains the same. A tile is traversable only if its terrain type appears in the set of terrains that comprise the agent’s capability and if the corresponding clearance value is at least equal to the size of the agent. To illustrate these ideas I’ve put together a simplified pseudocode implementation of the algorithm, Annotated A*:
Function: getPath
Parameters: start, goal, size, capability

    push start onto the open list
    while the open list is not empty
        take the best node from the open list; call it current
        if current is the goal, return the path
        for each neighbour of current
            if neighbour is on the closed list, skip it
            else if neighbour is already on the open list, update its weights
            else if clearance(neighbour, capability) >= size, push neighbour onto the open list
            else, skip neighbour
        push current onto the closed list
    return failure
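For the curious, the pseudocode above can be fleshed out into a small runnable version. This is my own simplified sketch (not the HOG implementation): the clearance and neighbours functions are assumed to be supplied by the caller, and edge costs are uniform.

```python
import heapq

def annotated_astar(start, goal, size, capability, clearance, neighbours):
    # clearance(tile, capability) -> int; neighbours(tile) -> iterable of tiles.
    # Returns a list of tiles from start to goal, or None if no path exists.
    def h(a, b):
        # Manhattan-distance heuristic, assuming a 4-connected grid
        return abs(a[0] - b[0]) + abs(a[1] - b[1])

    open_list = [(h(start, goal), 0, start)]
    parent = {start: None}
    g = {start: 0}
    closed = set()
    while open_list:
        _, g_cur, cur = heapq.heappop(open_list)
        if cur == goal:  # reconstruct the path by walking back to start
            path = []
            while cur is not None:
                path.append(cur)
                cur = parent[cur]
            return path[::-1]
        if cur in closed:
            continue  # stale heap entry
        closed.add(cur)
        for nxt in neighbours(cur):
            if nxt in closed:
                continue
            if clearance(nxt, capability) < size:
                continue  # the agent does not fit on this tile
            tentative = g_cur + 1
            if nxt not in g or tentative < g[nxt]:
                g[nxt] = tentative
                parent[nxt] = cur
                heapq.heappush(open_list,
                               (tentative + h(nxt, goal), tentative, nxt))
    return None
```

Run against a tiny annotated map, a size-1 agent finds a path through a narrow corridor while a size-2 agent correctly fails.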
So, that’s it! If you’re interested in playing with a working implementation of Annotated A* you can check out the source code I wrote to evaluate it. The code itself is written in C++ and based on the University of Alberta’s freely available pathfinding library, Hierarchical Open Graph (or HOG). HOG compiles on most platforms; I’ve personally tested it on both OSX and Linux and I’m told it works on Windows too. The classes of most interest are probably AnnotatedMapAbstraction, which deals with computing true clearance values for a map, and AnnotatedAStar which is the reference implementation of the search algorithm described here.
In the second part of this article I’m going to expand on the ideas presented here and explain how one can build an efficient hierarchical representation of an annotated map that requires much less memory to store and retains the completeness characteristics of the original. I’ll talk about why this is important and how one can use such a graph to answer pathfinding queries much faster than Annotated A*. At some point soon I also hope to discuss how one can use clearance-based pathfinding on continuous (non-grid) maps.