As I've mentioned over the last couple of weeks, I've been working on a framework to play Zombology automatically, with AI players making all the decisions. On Friday night, I finally got a basic version of the application working.
The whole point of this is that I need more data on the relative difficulty of the game's new rules and how well they scale with player count. A couple of months ago I drew a graph of how I wanted the results to look: the chance of winning in each round twice as high as in the previous round, and the chance of all players losing at 50% (it's semi-co-operative: when the game is won there are winners and losers).
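To put numbers on that target, here's a quick Python sketch (assuming the game's eight rounds): with a 50% loss chance and each round's win chance double the previous round's, the round-1 win chance p satisfies 255p = 50%, i.e. roughly 0.2%, climbing to about 25% by round 8.

```python
# Target shape: 50% chance the game is lost outright, and the chance of
# winning in round r double the chance of winning in round r - 1.
ROUNDS = 8
LOSS_CHANCE = 0.5

# p * (2^0 + 2^1 + ... + 2^7) = p * 255 = 0.5, so p = 0.5 / 255
p1 = (1 - LOSS_CHANCE) / (2 ** ROUNDS - 1)

for r in range(1, ROUNDS + 1):
    print(f"round {r}: {p1 * 2 ** (r - 1):.2%}")  # ~0.20% up to ~25.10%
```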
To build a graph that looks like that I need to play at least 128 games at each player count (3-8), so 768 games in total. So far I've managed 33.
While this is looking alright, there's still nowhere near enough data - I've not lost a 7- or 8-player game yet with the new rules (though we've only played twice at each of those player counts). Clearly, waiting around until I've played enough games with real people is a non-starter.
The AI idea was a way to get more data, but with very little free time (and soon to be even less, with Daughter The Second at most 10 days away) I'm not going to be able to come up with a hyper-realistic player AI myself. So I've opened it up as a competition, which I'll elaborate on next week.
For the moment, I've started collecting data with an entirely random player - it figuratively rolls a die for every decision it's faced with. I've run the simulation 60,000 times: 10,000 games at each player count from three to eight.
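The harness is simple in outline. Here's a minimal Python sketch of the shape of it - `play_game` and `random_strategy` are hypothetical stand-ins, not my framework's real API, and the stub body just lets the loop run:

```python
import random
from collections import Counter

def random_strategy(options):
    """Random McRandomface: 'rolls a die' over whatever choices are legal."""
    return random.choice(options)

def play_game(num_players, strategy):
    """Stand-in for the real rules engine. The real thing plays a full game,
    calling strategy(legal_options) at every decision point, and returns the
    round the game was won in (1-8), or 0 if everyone lost."""
    return 0  # stub so the harness runs; the actual engine goes here

GAMES_PER_COUNT = 10_000
results = {}  # player count -> Counter mapping winning round (0 = loss) to games
for players in range(3, 9):
    results[players] = Counter(
        play_game(players, random_strategy) for _ in range(GAMES_PER_COUNT)
    )
```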
The good news is that Random McRandomface is pretty crap at Zombology. The chances of a single instance of him winning a game vary between 0.2% (in a 3-player game) and 1.5% (in an 8-player game). The chances of the game being won vary between 0.5% (3-player again) and 11% (8-player again). Both of these are way south of the 50% I'm aiming for, but I'm very happy that the game is usually lost when played at random. Out of 60,000 games, only one of them was won in round 3 (of 8). Also, if you exclude the loss bar (it's so large you can't see the others!), the wins-by-round graph looks pretty good too.
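Those percentages drop straight out of the tallies. A sketch of the summary step, reusing the hypothetical `results` structure from the previous snippet:

```python
for players, tally in sorted(results.items()):
    games = sum(tally.values())
    wins = games - tally[0]  # round 0 means everyone lost
    print(f"{players} players: game won in {wins / games:.1%} of games")
    # Wins-by-round shape with the (huge) loss bar excluded:
    for rnd in range(1, 9):
        share = tally[rnd] / wins if wins else 0.0
        print(f"  round {rnd}: {share:.1%} of wins")
```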
The bad news is the variation by player count: 3-player is hard, 4- and 5-player pretty hard, 6- and 7-player much easier, and 8-player games are won about twice as often as 6- and 7-player ones. This might disappear once I have a more realistic AI, but it's a potential concern.
My (and your?) next task is to write a more realistic AI...