Shortest Path

January 8, 2009

CIG’08 Conference Summary

Filed under: conference — dharabor @ 13:31

Last month I was fortunate enough to attend the IEEE Symposium on Computational Intelligence and Games (CIG’08) which was held over 4 days (15-18 December) at the University of Western Australia in sunny Perth.

For those unfamiliar with the event, the CIG series of conferences (now in its 4th iteration) has quickly established itself as a forum for academics and games industry insiders to discuss recent advances in AI and explore directions for future work. Its aims are actually not dissimilar to those of AIIDE (which was held a few months ago). The main difference between them seems to be that CIG has a rather strong evolution and learning flavour whereas AIIDE is more general; in the academic community the two are regarded as roughly equal.

I was looking forward to this event because one of the keynote speakers was none other than Jonathan Schaeffer. You might have heard that Jonathan solved checkers a couple of years ago, but he's also rather well known for his work on pathfinding (he co-authored the HPA* algorithm, among other notable achievements). The other keynote speakers were Penny Sweetser from 2K Australia and Jason Hutchens from Interzone (a local development house based in Perth).

Aside from the academic track, in which over 50 papers were presented, the conference also featured a number of competitions: TORCS Car Racing, Ms. Pac-Man and the 2K Bot Prize. The latter, billed as a Turing Test for bots, garnered quite a bit of interest, due mostly, I suspect, to the AUD$10,000 prize on offer for any bot able to fool 4/5 human judges playing Unreal Tournament 2004.

What follows is my summary of the event; my notes are by no means exhaustive so I’ll attempt to focus on the bits I found particularly interesting.

Day 1

The first day of the conference was a warm-up of sorts with a number of tutorials on different aspects of computational intelligence from learning algorithms to opponent modeling and player satisfaction. I think the standout session for me was Simon Lucas’ talk on co-evolution and Temporal Difference Learning. TDL has long been on my never-ending “stuff-to-read-about” list and Simon is a great presenter able to clearly communicate complex ideas in a straightforward manner.

The day ended with a combined poster session and welcoming festivities. There were a number of interesting posters: Julien Kloetzer undertook a comparative study of different algorithms for solving the game of Amazons, Vukosi Marivate looked at evolving board-game agents using social learning and Stefan Wender used reinforcement learning to find city placement strategies in Civilization IV. I'm a big Civ fan so I spent a while talking to Stefan about his approach; his group uses the Civ scenario editor to train an offline learner that evaluates candidate locations using scoring information from the game — it's pretty neat!

Day 2

Jonathan Schaeffer kicked off the day with a fantastic keynote on how he and his team at the University of Alberta solved Checkers over an almost 20-year period. Along the way they solved the famous 100-year checkers position and pushed the very limits of computation and storage. According to Jonathan, they actually had to stop the search for 4 years, between 1997 and 2001, to wait until 64-bit machines became commonplace. Checkers has on the order of 10²⁰ possible positions (a rather large number!) so it's very impressive that it was solved at all.

Simon Lucas looked at learning rates of evolutionary and TDL algorithms. He observes that co-evolution is very slow and suggests using TDL to bootstrap the learning process.
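At its core, TDL updates a value estimate toward a bootstrapped target rather than waiting for a final game outcome, which is what makes it a fast complement to slow co-evolution. The sketch below is a generic TD(0) update, not Simon's actual implementation; all names are illustrative.

```python
def td0_update(values, s, s_next, reward, alpha=0.1, gamma=1.0):
    """One TD(0) step: nudge V(s) toward the target r + gamma * V(s').

    `values` maps states to value estimates; unseen states default to 0.
    `alpha` is the learning rate, `gamma` the discount factor.
    """
    v = values.get(s, 0.0)
    target = reward + gamma * values.get(s_next, 0.0)
    values[s] = v + alpha * (target - v)
    return values[s]

# Illustrative usage: a single reward of 1.0 pulls V("a") up by alpha.
values = {}
td0_update(values, "a", "b", 1.0)
```

Because each update only needs one transition, the learner can improve after every move rather than once per evolved generation.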

Pieter Spronck presented a paper on units in a group learning formations and moving as a whole (the effect sounds similar to flocking).

Stephen Hladky talked about estimating the positions of opponents in FPS games. His approach relies on a database of game logs (CounterStrike in this case) to figure out the probability of a player moving from one area to another. The approach seems to work well and the idea was, I thought, rather nice.
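The core of such an approach is a transition model estimated from logged traces: count how often players moved from one map area to another, then normalise the counts into probabilities. A minimal sketch (the traces and area names are made up, and this is not Stephen's actual system):

```python
from collections import defaultdict

def estimate_transitions(logs):
    """Build a first-order transition model from game logs.

    `logs` is a list of traces, each a sequence of area ids visited by a
    player. Returns {area: {next_area: probability}}.
    """
    counts = defaultdict(lambda: defaultdict(int))
    for trace in logs:
        for a, b in zip(trace, trace[1:]):
            counts[a][b] += 1
    return {
        a: {b: n / sum(nbrs.values()) for b, n in nbrs.items()}
        for a, nbrs in counts.items()
    }

# Hypothetical traces of players moving between named map areas.
logs = [
    ["spawn", "mid", "site_a"],
    ["spawn", "mid", "site_b"],
    ["spawn", "mid", "site_a"],
]
model = estimate_transitions(logs)  # e.g. P(site_a | mid) = 2/3
```

Given a last known position, the model can then be rolled forward to maintain a belief over where an unseen opponent probably is.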

Mike Preuss discussed using self-organising maps to learn which unit type is most effective against an opposing unit type in an RTS game.

Raphael Stuer looked at using influence maps and flocking to prevent groups of units from getting killed while pathfinding through enemy territory. In their implementation, agents path around areas influenced by strong enemy structures (like walls of towers) and plough through weaker enemy structures when the group is at an advantage. Very cool!

Day 3

Penny Sweetser from 2K Australia started the day with an interesting keynote on using emergence in games to improve the player experience. Penny posited that emergence in games should be limited within certain restricted parameters rather than being fully sandbox-style (local rather than global emergence). The talk was actually quite broad in scope, covering a lot of ground from emergent unit behaviour to narrative.

Bobby Bryant discussed his progress in trying to evolve a visual controller for FPS shooters. The main idea involves training a genetic algorithm to recognise visual input from a low-resolution “retina” in order to create aiming behaviours for bots.

John Reede had a neat paper about using Modular Neural Nets for agent control in a realtime top-down shooter. John's controller has two components, one for movement and the other for shooting, each trained independently of the other. The main advantage of MNNs seems to be that they can speed up the learning process significantly and reach better fitness.

There were 3 competitions run during the afternoon:

  • Ms. Pac-Man (won by Marcus Gallagher)
  • TORCS Car Racing (won by Luigi Cardamone [ed: Thanks Julian!])
  • 2K Bot Prize (won by a team from the Czech Republic)

I spent most of my time at the bot prize chatting to participants and spectators. Winning the major prize of the competition required building a bot able to play Unreal Tournament well enough that 4 of the 5 judges were fooled into thinking they were playing a human. Unfortunately the winning entry, AMIS, did not win the major prize (only 2 of the 5 judges were fooled) but there was a minor prize on offer and a nice-looking trophy.

I took a few photos during the competition to try and capture the flavour.

Day 4

Jason Hutchens from Interzone Games opened the final day with a keynote about pseudo-intelligence as entertainment. Jason's talk centred on a major problem for AI game developers: building behaviour that is surprising yet relevant. In the talk Jason posited that game AI is a Turing test where players are the judges. Learning and other emergent behaviours make this goal difficult to achieve, so AI developers cheat. Jason also discussed some of the challenges faced by game developers, including tight deadlines, camera problems, terrain analysis, dynamic obstacles, spatial awareness and building better tools to debug AI.

One of the interesting papers of the day came from Marcus Gallagher who discussed his approach for winning the Ms. Pac Man competition. Marcus described a simple (but effective) system using influence maps (potential fields really) to drive Pac Man away from ghosts and towards pills.
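The idea can be sketched with a tiny potential-field scorer: each ghost contributes a repulsive term and each pill an attractive one, both falling off with distance, and the agent simply moves to the adjacent cell with the highest score. This is only an illustration of the general technique, not Marcus's actual entry; weights and distances are made up.

```python
def influence(pos, ghosts, pills, ghost_w=-4.0, pill_w=1.0):
    """Score a grid cell: ghosts repel (negative weight), pills attract."""
    def dist(a, b):
        # Manhattan distance; +1 avoids division by zero on the cell itself.
        return abs(a[0] - b[0]) + abs(a[1] - b[1]) + 1
    score = sum(ghost_w / dist(pos, g) for g in ghosts)
    score += sum(pill_w / dist(pos, p) for p in pills)
    return score

def best_move(moves, ghosts, pills):
    """Pick the candidate cell with the highest combined influence."""
    return max(moves, key=lambda m: influence(m, ghosts, pills))

# Illustrative situation: a ghost to the east, a pill to the north.
move = best_move([(1, 0), (0, 1)], ghosts=[(2, 0)], pills=[(0, 2)])
```

The appeal is that the policy is entirely reactive: no search or learning, just a weighted sum evaluated at each junction.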

Mark Wittkamp discussed a learning method for improving group behaviour for ghosts in Pac-Man. The idea was interesting but the overall result was a little disappointing; Pac-Man dies only about 1/6 more often than against the original ghost AI from 30 years ago (using a hand-coded Pac-Man bot to train against).

Toward the end of the conference I gave my talk on efficient pathfinding approaches for variable-size units in heterogeneous terrain environments. My paper describes HAA*, a hierarchical path planner that builds a single compact graph to represent complex multi-terrain game maps. I believe this is the first time this problem has been thoroughly studied in the academic literature. I spoke to a few game developers at the conference who had come across this issue; it seems they solve it by storing different copies of the map: one for each agent size/terrain-traversal capability combination. I believe HAA* should be much more (space) efficient and I hope to discuss the technique at length in an upcoming post. In the meantime, you can read the paper, take a look at my presentation slides or play around with the source code.
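To see why a single annotated map can replace per-size copies, consider annotating each cell with a clearance value: the side of the largest obstacle-free square anchored there. An agent of size k can then stand anywhere its clearance is at least k, so one grid serves every agent size. This is a simplified sketch in the spirit of HAA*'s annotations, not the algorithm from the paper (which also handles terrain-traversal capabilities and a hierarchy on top).

```python
def clearance(grid, traversable):
    """Annotate each cell with the side length of the largest square of
    traversable cells whose top-left corner is that cell (0 if blocked).

    `grid` is a list of strings; `traversable` is the set of passable chars.
    Computed bottom-up in one pass using the three neighbouring clearances.
    """
    h, w = len(grid), len(grid[0])
    c = [[0] * w for _ in range(h)]
    for y in range(h - 1, -1, -1):
        for x in range(w - 1, -1, -1):
            if grid[y][x] in traversable:
                if y == h - 1 or x == w - 1:
                    c[y][x] = 1  # border cells fit at most a 1x1 agent
                else:
                    c[y][x] = 1 + min(c[y + 1][x], c[y][x + 1], c[y + 1][x + 1])
    return c

# Illustrative 3x3 map with one obstacle in the bottom-right corner.
annotated = clearance(["...", "...", "..#"], traversable={"."})
```

A planner for a size-2 agent then simply ignores cells with clearance below 2, with no second copy of the map required.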
