The design of my agent is fairly simple and has a large number of adjustable parameters (e.g. the distance at which an enemy ant is considered a "threat" to my base). This made it a perfect candidate for trying out some Genetic Algorithm (GA) theory, both to tune those parameters and to evaluate some algorithmic design decisions.
To start using a GA, one must generate an initial population of candidate solutions to the problem. Mine currently takes the form of 12 versions of my agent.
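Generating that initial population might look something like the sketch below. The parameter names, ranges and population size are illustrative stand-ins, not the actual values from my agent:

```python
import random

# Hypothetical parameter ranges -- the names and bounds here are made up
# for illustration, not taken from the real agent.
PARAM_RANGES = {
    "threat_distance": (5, 30),   # enemy ant closer than this to base = threat
    "explore_weight": (0.0, 1.0),
    "food_weight": (0.0, 1.0),
}

POPULATION_SIZE = 12

def random_agent():
    """One candidate solution: a dict of randomly chosen parameter values."""
    return {name: random.uniform(lo, hi)
            for name, (lo, hi) in PARAM_RANGES.items()}

population = [random_agent() for _ in range(POPULATION_SIZE)]
```

Each dict in `population` then gets baked into one of the 12 agent versions.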
Once an initial set of solutions has been generated, the next step is to evaluate the fitness of each one. Each agent I design is a different "solution" to the problem of being the best agent - the best agent is the fittest.
I decided the simplest way to evaluate each agent's fitness is to have it compete against my other agents and the sample agents, using the standard game format from the official servers.
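Turning a pile of game results into a fitness value could be as simple as averaging each agent's score over all the games it played. The log format below is an assumption; the real log files may look different:

```python
from collections import defaultdict

def fitness_from_results(results):
    """Average score per agent across all recorded games.

    `results` is a list of (agent_name, score) pairs, as they might be
    parsed out of the per-machine log files (format is illustrative).
    """
    totals = defaultdict(float)
    counts = defaultdict(int)
    for agent, score in results:
        totals[agent] += score
        counts[agent] += 1
    return {agent: totals[agent] / counts[agent] for agent in totals}

scores = fitness_from_results([("v1", 3), ("v2", 1), ("v1", 1)])
# v1 played twice (3 and 1) so averages 2.0; v2 played once with 1.0
```

More games per agent means a less noisy average, which is exactly why the distributed setup below exists.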
As I have a number of laptops and computers, none of which is particularly powerful, I decided to try to build a distributed tournament system so that I could play as many games as possible and get the best estimate of fitness. My setup is as follows:
- Each machine is running Ubuntu 11.10, with Dropbox installed. The game's Dropbox folder contains the game engine, maps and all the agents that are currently being tested.
- This allows for new agents to be added at any point and all machines to be immediately updated.
- Each machine continuously:
- Selects a random map
- Selects a random set of agents to play on that map
- Plays the game
- Writes the score to a file named after its hostname - e.g. "log-ubuntubox.txt". These files also live in the Dropbox folder.
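One iteration of the loop each machine runs continuously can be sketched like this. The map and agent filenames are made up, and the actual call to the game engine is omitted, since its invocation depends on the starter-kit tools:

```python
import random
import socket

def pick_matchup(maps, agents, players_per_game=4):
    """Choose a random map and a random set of distinct agents for one game."""
    game_map = random.choice(maps)
    players = random.sample(agents, players_per_game)
    return game_map, players

def log_filename():
    """Per-machine log file, e.g. 'log-ubuntubox.txt'."""
    return "log-%s.txt" % socket.gethostname()

# One loop iteration; running the game engine and appending the score
# to log_filename() would go after this (omitted here).
game_map, players = pick_matchup(
    maps=["maze_02.map", "random_walk_03.map"],            # illustrative names
    agents=["v1.py", "v2.py", "v3.py", "v4.py", "v5.py"])  # illustrative names
```

Because the logs are named per host and written into Dropbox, the machines never contend for the same file, and the results sync back automatically for fitness evaluation.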
Adding a new machine to the pool is as simple as:
- Install Ubuntu
- Install Dropbox
- Run "python play.py"
With any luck this should result in a pretty competitive entry in this year's Challenge - I will keep you posted!