Sunday, December 4, 2011

Distributed tournaments for the Google AI Challenge

As I noted a couple of posts ago, I am taking part in the Google AI Challenge again this year (my entry). The challenge this year is Ants, a game which requires entries (agents) to control a number of ants in an environment made up of land, water, food and enemy ants.

The design of my agent is fairly simple and has a large number of adjustable parameters (e.g. the distance between an enemy ant and my base at which the enemy is considered a "threat"). This made it a perfect candidate for trialling some Genetic Algorithm (GA) theory to tune those parameters, as well as to evaluate some algorithmic design decisions.

To start using a GA, one must generate an initial population of solutions to the problem. Mine currently takes the form of 12 versions of my agent.
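
To make this concrete, here is a minimal sketch of how an initial population like this could be generated. The parameter names and ranges below are hypothetical - stand-ins for whatever knobs the agent actually exposes:

    import random

    # Hypothetical tunable parameters and the range each may take -
    # placeholders for whatever the real agent exposes.
    PARAM_RANGES = {
        "threat_distance": (5.0, 30.0),  # how close an enemy must be to count as a "threat"
        "food_weight": (0.1, 5.0),       # how strongly ants are drawn towards food
        "explore_weight": (0.1, 5.0),    # how strongly ants favour unexplored tiles
    }

    def random_agent():
        """One candidate solution: a random point in parameter space."""
        return {name: random.uniform(lo, hi)
                for name, (lo, hi) in PARAM_RANGES.items()}

    # The initial population: 12 randomly parameterised versions of the agent.
    population = [random_agent() for _ in range(12)]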

Once an initial set of solutions has been generated, the next step is the evaluation of the fitness of each solution. Each agent I design is a different "solution" to the problem of being the best agent - the best agent is the fittest.

I decided the simplest way to evaluate the fitness of each agent is to have it compete against the other agents I have made, as well as the sample agents, in the standard game format used on the official servers.

As I have a number of laptops and desktop machines, none of which is particularly powerful, I decided to try to build a distributed tournament system so that I could play as many games as possible and get the best estimate of each agent's fitness. My setup is as follows:

  • Each machine is running Ubuntu 11.10, with Dropbox installed. The game's Dropbox folder contains the game engine, maps and all the agents that are currently being tested.
    • This allows for new agents to be added at any point and all machines to be immediately updated.
  • Each machine continuously:
    1. Selects a random map
    2. Selects a random set of agents to play on that map
    3. Plays the game
    4. Writes the score to a file named after its hostname - e.g. "log-ubuntubox.txt". These files are also in the Dropbox folder.
  • Any machine can run a shared script that aggregates the results from all log-*.txt files, computing the average points/game for each agent. This is used as the fitness.
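
A minimal sketch of such an aggregation script is below. It assumes each log line holds an agent name and its score from one game - the real format is simply whatever the game-playing script writes:

    import glob
    from collections import defaultdict

    def fitness_table(pattern="log-*.txt"):
        """Average points per game for each agent, across every machine's log."""
        totals = defaultdict(float)
        games = defaultdict(int)
        for path in glob.glob(pattern):
            with open(path) as log:
                for line in log:
                    # Assumed log format: one "<agent> <score>" pair per line.
                    agent, score = line.split()
                    totals[agent] += float(score)
                    games[agent] += 1
        return {agent: totals[agent] / games[agent] for agent in totals}

    if __name__ == "__main__":
        table = fitness_table()
        for agent in sorted(table, key=table.get, reverse=True):
            print("%-30s %6.2f" % (agent, table[agent]))
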
Because I am using Python 2.7 (installed by default on Ubuntu 11.10) for the game engine, agents and extra scripting, provisioning a new machine is this simple:
  1. Install Ubuntu
  2. Install Dropbox
  3. Run "python play.py"
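
For completeness, here is a minimal sketch of what a play.py driver implementing the loop described above could look like. The engine command and file layout are assumptions - engine.py is a stand-in for however the official game engine is actually invoked, and the score parsing is a placeholder for whatever it reports:

    import glob
    import random
    import socket
    import subprocess

    MAPS = glob.glob("maps/*.map")
    AGENTS = glob.glob("agents/*.py")
    LOG = "log-%s.txt" % socket.gethostname()   # e.g. "log-ubuntubox.txt"

    def play_one_game():
        game_map = random.choice(MAPS)       # 1. select a random map
        players = random.sample(AGENTS, 4)   # 2. select a random set of agents
                                             #    (the real player count depends on the map)
        # 3. Play the game. Placeholder invocation: the real command belongs
        # to the official engine, wrapped so it prints each agent's score.
        cmd = ["python", "engine.py", "--map", game_map] + players
        output = subprocess.check_output(cmd)
        # 4. Append "<agent> <score>" lines to this machine's log in Dropbox.
        with open(LOG, "a") as log:
            for agent, score in zip(players, output.split()):
                log.write("%s %s\n" % (agent, score))

    while True:
        play_one_game()
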
So far this is working quite well, with quite dramatic and unexpected performance differences between some nearly identical agents. Once each agent has played at least 30 games, I will remove some of the lowest-scoring agents and add new versions that combine the traits of the most successful ones.
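
When that happens, the replacement step could look something like the sketch below - truncation selection plus uniform crossover of parameter values, which is one standard GA recipe rather than necessarily the one I will settle on:

    import random

    def next_generation(population, fitness, keep=6, size=12):
        """Drop the weakest agents, then refill the population by
        combining the parameter values ("traits") of the survivors.

        population - list of parameter dicts
        fitness    - list of average points per game, parallel to population
        """
        # Truncation selection: keep only the `keep` fittest agents.
        ranked = sorted(range(len(population)),
                        key=lambda i: fitness[i], reverse=True)
        survivors = [population[i] for i in ranked[:keep]]

        children = []
        while len(survivors) + len(children) < size:
            mum, dad = random.sample(survivors, 2)
            # Uniform crossover: each parameter is copied from one parent at random.
            children.append({p: random.choice((mum[p], dad[p])) for p in mum})
        return survivors + children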

With any luck this should result in a pretty competitive entry in this year's Challenge - I will keep you posted!