Shortest Path

July 11, 2011

The Paper

Filed under: conference,non-technical — dharabor @ 16:34

I wanted to share this small piece of creative writing. It was cobbled together earlier this year while I was turning myself inside out trying to meet various submission deadlines. It’s a change of pace for this blog, but I hope you enjoy it nonetheless. I call it, The Paper:

Tick, Tock. Tick, Tock.

Steadily forward march the hands of the clock.
The paper needs writing, results must be got;
Forget about sleep or all is for naught!
With each passing second the conference draws nearer,
To obtain tenure you must publish here.

Tick, Tock. Tick, Tock.

Redouble your efforts ye graduate student,
And condense into pages no greater than six,
The sum of your efforts these many long weeks.
Spell out in sections, as short as can be,
The problem you solve and your neat novelty.
Whether by theory or experimental design,
Argue the case that your approach is sublime!
Cite all prior work and conclude on a high,
Then compile your sources one final time.
Press upload and hope for the best,
In six weeks you find out if your work’s passed the test!

Tick, Tock. Tick, Tock.

With the deadline past and the system now closed,
The program committee take up their roles.
On secretive forums, in groups of three,
They meet to discuss what will and won’t be.
The schedule is short and there’s lots of bidding,
Each submission receives only a few hours’ reading.
Each reviewer’s position: your paper’s rejected;
Your job was to prove that it should be accepted!
The standards are high and acceptance rates low,
The hopes of some authors will suffer a blow.
Yet should worst come to worst, you need never fear;
If you don’t make it now, there’s always next year!


October 21, 2010

AIIDE 2010 Conference Summary

Filed under: conference,pathfinding — dharabor @ 12:48

One of the (few) perks of being a research student is attending conferences. There are many reasons to attend a conference: you get to travel to places you probably wouldn’t otherwise visit, toss around ideas with other people who work in the same research area and get a glimpse into state-of-the-art approaches for problems you might never have considered.

Last week I was fortunate enough to attend the 2010 Conference on Artificial Intelligence and Interactive Digital Entertainment (otherwise known as AIIDE). A relative newcomer to the scene (currently in its 6th iteration), AIIDE has developed a bit of a reputation as the venue at which the games industry and AI academia meet. From a student’s point of view AIIDE is attractive for two reasons:

  1. There is a strong program committee and rigorous peer-review process (so publishing here looks good on your CV).
  2. It attracts interesting and influential keynote speakers (such as industry icon Peter Molyneux in 2007 and pathfinding guru Paul Tozour in 2009).

This year’s event, held at Stanford University from the 11th to the 13th of October, promised to be no different. On the cards were several interesting keynote speakers, a StarCraft AI competition and an entire day of pathfinding goodness (including my own humble contribution to speeding up search). Was I excited? You bet!

Day 1

The conference kicked off in style with a wonderful keynote from industry veteran Chris Jurney (slides). Known for his work on games such as Company of Heroes and Dawn of War 2, Chris talked about some of the challenges he’s faced implementing efficient AI systems in real-time strategy games. Perhaps the most surprising revelation (for me) was the critical role that pathfinding plays in Chris’ work. For example:

  • The pathfinding system is invoked to answer distance queries that facilitate other AI features such as cover point identification.
  • Pathfinding is used as a mechanism for maintaining squad formations.
  • During combat, units constantly re-evaluate their paths to avoid overlapping with each other and to make sure they don’t overshoot their targets.

Chris also touched on some of the difficulties he encountered when modeling different size units (a problem close to my heart) and finished by outlining an interesting rule-based melee system which was employed in Dawn of War 2. It was a complex dance of 8-10 weight-based heuristic features (such as retaliation, decision inertia and others) which all had to be balanced in order to achieve nicely choreographed battles of up to 100 units at a time. Impressive stuff!

Paper Highlights

Perceptually Realistic Behavior through Alibi Generation
Ben Sunshine-Hill, Norman Badler

One of the first talks of the day and also among the most interesting. This paper deals with the problem of generating intent (or alibis) for unscripted NPCs the player might be observing in large open-world games. The authors have developed a novel system capable of efficiently generating increasingly detailed plans for NPCs so it looks like they know what they’re doing rather than stupidly wandering around with no purpose.

Learning Companion Behaviors Using Reinforcement Learning in Games
AmirAli Sharifi, Duane Szafron, Richard Zhao

Duane Szafron presented a reinforcement learning approach for developing the behaviour of companion NPCs in story-driven games such as Neverwinter Nights. The main motivation of the work appears to be that party members should react to the player in a manner consistent with how the player has treated them thus far. Pretty neat!

Using Machine Translation to Convert Between Difficulties in Rhythm Games
Kevin Gold, Alexandra Olivier

A very engaging talk from Kevin Gold, this paper outlines an automatic system for analysing note sequences in 1-dimensional rhythm games like Guitar Hero. The aim is to take in the sequence of notes for an expert-difficulty song and automatically generate sequences for the easy, medium and hard difficulties. It’s an interesting problem because, according to Kevin, game designers at Harmonix have to manually encode the different sequences of notes which appear for every song in each of the 4 available difficulties. The authors analysed the song corpus of Guitar Hero 1 and 2 (the notes are apparently stored as MIDI files on the disc) and developed a novel Bayesian method for predicting which notes to drop when converting between difficulties. Perhaps the most remarkable aspect of the work is the accuracy of the reported method: in many cases the generated songs match the official versions over 80% of the time. Nice work guys!

Day 2

A couple of interesting keynotes on the second day:

  • AI researcher Michael Young (from North Carolina State’s Liquid Narrative group) discussed some of the challenges involved in automatically generating narratives. He touched on a range of issues including story generation (and the fact that current methods produce scripts which are sound but not necessarily believable), discourse generation strategies and the closely related problem of automated cinematography.
  • Gaming industry veteran Chris Hecker talked about his latest project: SpyParty. This is an interesting game in which one player (the Spy) must complete a set of goals by talking to and interacting with a group of NPCs in a social setting — all while remaining undetected by another player (the Sniper) who is observing the party in order to identify and kill the Spy. It’s a kind of reverse Turing Test where in order to win the Spy must successfully pretend to be a computer. Chris went on to talk about some of the challenges he faced implementing SpyParty, and in particular the difficulties he’s had modeling human-scale interactions with objects (there is apparently not much work in this area). Listening to Chris describe his efforts, it seems that the primary novelty which SpyParty delivers is that it aims to entertain through suspense and the playing of psychological games between the players rather than through the usual action or fear mechanics which are so popular in the games industry. It’s a brave and ambitious project in my opinion — the likes of which are few and far between. Good luck with it Chris!

Another highlight of the day was a Man vs Machine match between one of the best-performing entrants in the StarCraft AI competition and former pro-gamer =DoGo= (a Brood War veteran from the 2001 WCG). The take-home message was clear: AI sucks. The computer opponent was easily crushed in a very one-sided match. Interestingly however, the same AI won more often than not when pitted against the default Blizzard bots; so it’s probably more than a match for most casual players.

Paper Highlights

An Offline Planning Approach to Game Plotline Adaptation
Boyang Li, Mark Riedl

Boyang Li discussed an interesting partial-order planning approach for tailoring role-playing game plots to particular player preferences. The idea is to feed an AI planner a set of player preferences (preferred quest types etc), a corpus of quests (made up of tasks, actions, rewards etc) and have it generate a sound and coherent story. It’s a nice idea, particularly as the described system appears to be able to repair existing plots to deal with changing preferences.

Adversarial Navigation Mesh Alteration
D. Hunter Hale, G. Michael Youngblood

This work studies competing groups of units operating in a Capture The Flag environment. Each group of units is able to effect various changes on the world; for example, by constructing obstacles to stop opposing units from reaching their flag or tearing down obstacles to reach an enemy flag (or both). An interesting aspect of the research is that the pathfinding routines of each group of agents are modified to take into account the cost of making changes to the world geometry — so, for example, rather than just computing the cost of going around an obstacle, units are also able to compute the cost of going through it. It’s a neat idea because it allows more complex agents to be created without actually adding complex agent design overheads.
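The "go around it vs. go through it" trade-off can be illustrated with a uniform-cost search in which blocked cells may be entered for an extra demolition fee. To be clear, this cost model and code are my own toy sketch on a unit-cost grid, not the authors' navigation-mesh formulation:

```python
import heapq

def cheapest_path_cost(grid, start, goal, demolish_cost=10):
    """Dijkstra on a 4-connected grid where blocked cells (1) may be
    entered by paying an extra demolition fee, so tunnelling through a
    wall competes directly with walking around it."""
    rows, cols = len(grid), len(grid[0])
    pq = [(0, start)]
    best = {start: 0}
    while pq:
        cost, (r, c) = heapq.heappop(pq)
        if (r, c) == goal:
            return cost
        if cost > best[(r, c)]:
            continue  # stale queue entry
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if 0 <= nr < rows and 0 <= nc < cols:
                step = 1 + (demolish_cost if grid[nr][nc] else 0)
                if cost + step < best.get((nr, nc), float('inf')):
                    best[(nr, nc)] = cost + step
                    heapq.heappush(pq, (cost + step, (nr, nc)))
    return None  # goal unreachable
```

With a high demolition fee the planner detours around walls; lower the fee and it starts smashing through them, which is exactly the behaviour described in the talk.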

Terrain Analysis in Real-time Strategy Games: Choke Point Detection and Region Decomposition
Luke Perkins

Another pathfinding-related paper, this work focuses on decomposing game maps (competition maps from StarCraft in this case) in order to identify choke points. The ultimate objective is to partition the environment into a series of different areas which can be used by higher-level AI routines to (for example) facilitate efficient navigation and coordinate strategic planning. The decompositions I saw looked very promising indeed; the algorithm appears to be very effective at partitioning the map into strategic areas (usually convex in shape and of near-maximal size) which are connected to each other only by transitions at choke-point locations. Nice job Luke!

Day 3

The third day opened with a keynote from Microsoft Researcher Sumit Basu. According to the program, Sumit’s research focuses on making music generation more accessible to non-musicians. I wish I could give a detailed account of the talk but sadly I missed it — I was too busy double and triple checking my slides for later in the day!

I did manage to arrive at the conference in time for the announcements of the StarCraft AI competition winners, however. 28 bots were submitted and four tournaments were held:

  • Tournament 1 (Micromanagement) was won by FreSCBot, an AI from two independent developers which makes use of a multi-agent finite-state machine approach.
  • Tournament 2 (Small-scale combat) was also taken out by FreSCBot.
  • Tournament 3 (Tech-limited game) was won by Luke Perkins’ MimicBot which uses a mirroring strategy to play.
  • Tournament 4 (Full Gameplay) was won by Overmind. This AI, by a team of students at UC Berkeley, used a highly effective mutalisk micromanagement strategy to snare overall victory.

Aside from eternal bragging rights, each winner received a limited edition copy of StarCraft 2 which was signed by the development team at Blizzard — sweet! Well done guys!

Paper Highlights

A Comparison of High-Level Approaches for Speeding Up Pathfinding
Nathan Sturtevant, Robert Geisberger

This work looked at the efficacy of a number of different approaches for speeding up pathfinding. In his talk Nathan compared the hierarchical pathfinding algorithm used in the recent roleplaying game Dragon Age: Origins with two lesser-known but very fast approaches: Differential Heuristics and Contraction Hierarchies. The former is an efficient memory-based heuristic that reduces each heuristic evaluation to a constant number of table lookup operations, while the latter is a newer technique which speeds up search by adding so-called shortcut edges to a graph. The most interesting result, in my opinion, was the reported performance of the Contraction Hierarchies method. It appears that, when tuned correctly, this algorithm is several times faster than standard A* search and in some cases even competitive with the suboptimal hierarchical pathfinder. Nice!
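To give a rough flavour of the Differential Heuristics idea: precompute true distances from one or more landmark locations, then lower-bound the distance of any query pair with a couple of table lookups via the triangle inequality. The sketch below assumes a unit-cost 4-connected grid and is purely illustrative; it is not the implementation evaluated in the paper:

```python
from collections import deque

def bfs_distances(grid, start):
    """Distances from `start` to every reachable cell on a
    4-connected grid (0 = free, 1 = blocked)."""
    rows, cols = len(grid), len(grid[0])
    dist = {start: 0}
    q = deque([start])
    while q:
        r, c = q.popleft()
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if (0 <= nr < rows and 0 <= nc < cols
                    and grid[nr][nc] == 0 and (nr, nc) not in dist):
                dist[(nr, nc)] = dist[(r, c)] + 1
                q.append((nr, nc))
    return dist

def differential_h(landmark_dists, node, goal):
    """Admissible estimate of dist(node, goal): for each landmark L,
    |d(L, node) - d(L, goal)| <= d(node, goal) by the triangle
    inequality; take the max over all landmarks."""
    return max(abs(d[node] - d[goal]) for d in landmark_dists)
```

In practice the landmarks are chosen carefully and the precomputed tables are the main memory cost, which is exactly the time/space trade-off the comparison is about.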

DHPA* and SHPA*: Efficient Hierarchical Pathfinding in Dynamic and Static Game Worlds
Alex Kring, Alex J. Champandard, Nick Samarin

This paper, presented by Alex Kring, introduced two variations on the well-known HPA* algorithm. Both SHPA* and DHPA* aim to eliminate the insertion overhead associated with HPA*, but the latter also makes use of lookup tables to improve the performance of search when traversing individual clusters. Alex went on to discuss the applicability of DHPA* to maps in which the terrain features dynamic obstacles and touched on one of the less studied aspects of pathfinding that game developers often face: parallel processing.

Breaking Path Symmetries in 4-connected Grid Maps (pdf)
Daniel Harabor, Adi Botea

My own submission to this year’s conference is a novel technique for speeding up pathfinding search. Rather than invent a new algorithm to go fast, we instead develop a technique for reducing the size of the search space by eliminating symmetry. Our approach is simple: we decompose an arbitrary grid map into a series of empty rectangular rooms and discard all tiles from the interior of each room — leaving only a boundary perimeter. Finally, we connect each pair of tiles on directly opposite sides of the perimeter. It turns out that this simple strategy can speed up A* (and indeed, any search algorithm) by 3-4 times without any loss of generality. I think the most exciting aspect of this work however is that it is completely orthogonal to all existing pathfinding optimisations. Given how simple and effective the technique can be, I hope that it will be noticed by performance-hungry game developers in the near future.
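The core construction can be sketched in a few lines. This toy version covers only a single empty room on a unit-cost 4-connected grid (the paper handles arbitrary maps): every interior tile is discarded, and each perimeter tile gains a macro edge straight across to the tile on the opposite side.

```python
def macro_edges(width, height):
    """Macro edges for one empty rectangular room, in (x, y) tile
    coordinates: each perimeter tile is linked to the tile directly
    opposite it, with cost equal to the straight-line crossing
    distance. Interior tiles are simply discarded."""
    edges = []
    for x in range(width):           # top row <-> bottom row
        edges.append(((x, 0), (x, height - 1), height - 1))
    for y in range(height):          # left column <-> right column
        edges.append(((0, y), (width - 1, y), width - 1))
    return edges
```

Because a crossing of the room is summarised by a single edge, the search never needs to expand the many interchangeable (symmetric) interior paths between the same two perimeter tiles.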

April 2, 2009

CIG’08 Proceedings Available Online

Filed under: conference — dharabor @ 12:36

I noticed recently that the proceedings of the 2008 IEEE Symposium on Computational Intelligence and Games (CIG’08) are now available online.

There were a number of interesting presentations at the conference, including a few neat pathfinding-related talks (which I discuss in an earlier summary), but the really cool thing is the other stuff the organisers have made available:

All 3 Keynote talks:

Slides from the various tutorials:

  • Learning to Play Games (Simon Lucas)
  • Inducing Agent Models from Examples (Bobby Bryant)
  • Measuring and Optimizing Player Satisfaction (George Yannakakis and Julian Togelius).

And of course copies of all 53 accepted papers.

I’m very impressed that the organisers were kind enough to make all these resources available. In my experience, it’s above and beyond what one could reasonably expect from similar events.

Together with the recent decision to open the AAAI digital library to the public, this gives me hope that we’re seeing a move away from subscription-based models for accessing academic literature. Now if only someone could convince those pesky journal archives to do the same. Elsevier, I’m looking at you!

January 8, 2009

CIG’08 Conference Summary

Filed under: conference — dharabor @ 13:31

Last month I was fortunate enough to attend the IEEE Symposium on Computational Intelligence and Games (CIG’08) which was held over 4 days (15-18 December) at the University of Western Australia in sunny Perth.

For those unfamiliar with the event, the CIG series of conferences (now in its 4th iteration) has quickly established itself as a forum for academics and games industry insiders to discuss recent advances in AI and explore directions for future work. Its aims are actually not dissimilar to those of AIIDE (which was held a few months ago). The main difference between them seems to be that CIG has a rather strong evolution and learning flavour whereas AIIDE is more general; in the academic community the two are regarded as about equal.

I was looking forward to this event because one of the keynote speakers was none other than Jonathan Schaeffer. You might have heard that Jonathan solved checkers a couple of years ago, but he’s also rather well known for his work on pathfinding (he co-authored the HPA* algorithm, among other notable achievements). The other keynote speakers were Penny Sweetser from 2K Australia and Jason Hutchens from Interzone (a local development house based in Perth).

Aside from the academic track, in which over 50 papers were presented, the conference also featured a number of competitions: TORCS Car Racing, Ms. Pac-Man and the 2K Bot Prize. The latter, billed as a Turing Test for bots, garnered quite a bit of interest, due mostly, I suspect, to the AUD$10,000 prize on offer for any bot able to fool 4/5 human judges playing Unreal Tournament 2004.

What follows is my summary of the event; my notes are by no means exhaustive so I’ll attempt to focus on the bits I found particularly interesting.

Day 1

The first day of the conference was a warm-up of sorts with a number of tutorials on different aspects of computational intelligence from learning algorithms to opponent modeling and player satisfaction. I think the standout session for me was Simon Lucas’ talk on co-evolution and Temporal Difference Learning. TDL has long been on my never-ending “stuff-to-read-about” list and Simon is a great presenter able to clearly communicate complex ideas in a straightforward manner.

The day ended with a combined poster session and welcoming festivities. There were a number of interesting posters: Julien Kloetzer undertook a comparative study of different algorithms for solving the game of Amazons, Vukosi Marivate looked at evolving board-game agents using social learning and Stefan Wender used reinforcement learning to find city placement strategies in Civilization IV. I’m a big Civ fan so I spent a while talking to Stefan about his approach; they use the Civ scenario editor to train an offline learner that evaluates candidate locations using scoring information from the game — it’s pretty neat!

Day 2

Jonathan Schaeffer kicked off the day with a fantastic keynote on how he and his team at the University of Alberta solved Checkers over an almost 20-year period. Along the way they solved the famous 100-year checkers position and pushed the very limits of computation and storage. According to Jonathan they actually had to stop the search for 4 years, between 1997 and 2001, to wait until 64-bit machines became commonplace. Checkers has 10²⁰ possible positions (a rather large number!) so it’s very impressive that it was solved at all.

Simon Lucas looked at learning rates of evolutionary and TDL algorithms. He observes that co-evolution is very slow and suggests using TDL to bootstrap the learning process.

Pieter Spronck had a paper dealing with units in a group learning formations and moving as a whole (the effect sounds similar to flocking).

Stephen Hladky talked about estimating the positions of opponents in FPS games. His approach relies on a database of game logs (CounterStrike in this case) to figure out the probability of a player moving from one area to another. The approach seems to work well and the idea was, I thought, rather nice.
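In spirit (and this is a hypothetical sketch, not Stephen's actual system), a first-order movement model can be estimated from logs by simply counting area-to-area transitions:

```python
from collections import Counter, defaultdict

def transition_probs(paths):
    """Estimate P(next_area | current_area) from logged paths,
    where each path is a list of area identifiers."""
    counts = defaultdict(Counter)
    for path in paths:
        for a, b in zip(path, path[1:]):  # consecutive area pairs
            counts[a][b] += 1
    return {a: {b: n / sum(c.values()) for b, n in c.items()}
            for a, c in counts.items()}
```

Given such a model and an opponent's last known area, repeatedly applying the transition probabilities yields a distribution over where they are likely to be now.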

Mike Preuss discussed using self-organising maps to learn which unit is best against an opposing unit type in an RTS game.

Raphael Stuer looked at using influence maps and flocking to prevent groups of units getting killed while pathfinding through enemy territory. In their implementation, agents path around areas influenced by strong enemy structures (like walls of towers) and plow through easy enemy structures when the group is at an advantage. Very cool!

Day 3

Penny Sweetser from 2K Australia started the day with an interesting keynote on using emergence in games to improve the player experience. Penny posited that emergence in games should be limited within certain restricted parameters rather than being sandbox-style (local rather than global emergence). The talk was actually quite broad in scope, covering a lot of ground from emergent unit behaviour to narrative.

Bobby Bryant discussed his progress in trying to evolve a visual controller for FPS shooters. The main idea involves training a genetic algorithm to recognise visual input from a low-resolution “retina” in order to create aiming behaviours for bots.

John Reede had a neat paper about using Modular Neural Nets for agent control in a real-time top-down shooter. John’s controller has two components, one for movement and the other for shooting, each trained independently of the other. The main advantage of MNNs seems to be that they can speed up the learning process significantly and evolve better fitness functions.

There were 3 competitions run during the afternoon:

  • Ms. PacMan (won by Marcus Gallagher)
  • TORCS Car Racing (won by Luigi Cardamone [ed: Thanks Julian!])
  • 2K Bot Prize (won by a team from the Czech Republic)

I spent most of my time at the bot prize chatting to participants and spectators. Winning the major prize of the competition involved building a bot able to play Unreal Tournament such that 4 of the 5 judges were fooled into thinking they were playing a human. Unfortunately the winning entry, AMIS, did not win the major prize (only 2 of the 5 judges were fooled) but there was a minor prize on offer and a nice-looking trophy.

I took a few photos during the competition to try and capture the flavour.

Day 4

Jason Hutchens from Interzone Games opened the final day with a keynote about pseudo-intelligence as entertainment. Jason’s talk centred on a major problem for AI game developers: building surprising yet relevant behaviour. In the talk Jason posited that game AI is a Turing test where players are the judges. Learning and other emergent behaviours make this goal difficult to achieve, so AI developers cheat. Jason also discussed some of the challenges faced by game developers, including tight deadlines, camera problems, terrain analysis, dynamic obstacles, spatial awareness and building better tools to debug AI.

One of the interesting papers of the day came from Marcus Gallagher who discussed his approach for winning the Ms. Pac Man competition. Marcus described a simple (but effective) system using influence maps (potential fields really) to drive Pac Man away from ghosts and towards pills.
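In spirit such a controller needs only a few lines: pills contribute attractive influence, ghosts contribute repulsive influence, both decaying with distance, and Pac Man greedily steps toward the most attractive neighbouring cell. The weights below are illustrative guesses, not Marcus's tuned values:

```python
def influence(cell, pills, ghosts):
    """Score a cell: pills attract, ghosts repel, with influence
    decaying as 1 / (1 + Manhattan distance)."""
    def dist(a, b):
        return abs(a[0] - b[0]) + abs(a[1] - b[1])
    attract = sum(1.0 / (1 + dist(cell, p)) for p in pills)
    repel = sum(4.0 / (1 + dist(cell, g)) for g in ghosts)
    return attract - repel

def best_move(neighbours, pills, ghosts):
    """Greedy controller: step to the neighbouring cell with the
    highest influence score."""
    return max(neighbours, key=lambda n: influence(n, pills, ghosts))
```

Weighting ghosts several times more heavily than pills (as above) makes the agent prioritise survival over eating, which matches the behaviour Marcus described.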

Mark Wittkamp discussed a learning method for improving group behaviour for the ghosts in Pac-Man. The idea was interesting but the overall result was a little disappointing; Pac-Man dies only 1/6 more often than against the original ghost AI from 30 years ago (using a hand-coded Pac-Man bot to train against).

Toward the end of the conference I gave my talk on efficient pathfinding approaches for variable-size units in heterogeneous terrain environments. My paper describes HAA*, a hierarchical path planner that builds a single compact graph to represent complex multi-terrain game maps. I believe this is the first time this problem has been thoroughly studied in the academic literature. I spoke to a few game developers at the conference who had come across this issue; it seems they solve it by storing different copies of the map: one for each agent size/terrain-traversal capability combination. I believe HAA* should be much more (space) efficient and I hope to discuss the technique at length in an upcoming post. In the meantime, you can read the paper, take a look at my presentation slides or play around with the source code.
