In little bits of free time I get here and there, I've been working on using reinforcement learning to build some better bots for my favorite multiplayer game. Project is up here: https://github.com/cgadski/autotude
So far, all my work has gone into the technical side of setting up the game (a Java app written in 2010) to work as a reinforcement learning environment. The developers were nice enough to maintain the source and open it to the community, so I patched the client and server to be controllable through protobuf messages. At this point, I can:
- Record games between humans. I also wrote a kind of janky replay viewer [1] that probably only makes sense to people who already play the game. (The game previously had no recording feature at all.)
- Define bots in Python with PyTorch and run them in offline training mode. (The game runs relatively fast, around 8 minutes of gameplay per second of real time.) There's a rough sketch of the control loop after this list.
- Run my Python-defined bots online versus human players. (Just managed to get this working today.)
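To give a sense of what "controllable through protobuf messages" means on the Python side, here's a minimal sketch of the kind of bot loop involved. To be clear: the message names, fields, port, and length-prefixed framing below are all hypothetical stand-ins for illustration; the real schema and wiring live in the repo.

    # Rough, hypothetical sketch of the loop: message names, fields, and
    # the 4-byte length-prefixed framing are invented for illustration;
    # the actual protobuf schema is in the repo.
    import socket
    import struct

    import torch

    from game_pb2 import Controls, Observation  # hypothetical generated module

    def recv_observation(sock):
        # Read a 4-byte big-endian length prefix, then the serialized message.
        (n,) = struct.unpack(">I", sock.recv(4, socket.MSG_WAITALL))
        obs = Observation()
        obs.ParseFromString(sock.recv(n, socket.MSG_WAITALL))
        return obs

    def send_controls(sock, controls):
        payload = controls.SerializeToString()
        sock.sendall(struct.pack(">I", len(payload)) + payload)

    def run_bot(policy, host="localhost", port=27015):
        # One observation in, one set of key presses out, every game tick.
        with socket.create_connection((host, port)) as sock:
            while True:
                obs = recv_observation(sock)
                features = torch.tensor([obs.x, obs.y, obs.angle, obs.energy])
                keys = (torch.sigmoid(policy(features)) > 0.5).tolist()
                send_controls(sock, Controls(left=keys[0], right=keys[1],
                                             throttle=keys[2], fire=keys[3]))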
It took a bunch of messing around with the Java source to get this far, and I haven't even really started on the reinforcement learning part yet. Hopefully I can start on that soon.
This game (https://planeball.com) is really unique, and I'm excited to produce a reinforcement learning environment that other people can play with easily. Thinking about how you might build bots for it is one of the problems that got me interested in artificial intelligence 8 years ago. The controls and mechanics are simple, and it's relatively easy to make bots that beat new players: basically, don't crash into obstacles, don't stall out, conserve your energy, and shoot when you'll actually deal damage. Good human players, though, do a lot of complicated intuitive decision-making.
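To make those baseline heuristics concrete, here's a toy rule-based bot in that spirit. Every field name and threshold is invented for illustration; this isn't code from the project.

    # Toy rule-based baseline: all field names and thresholds below are
    # hypothetical, just spelling out the heuristics described above.
    STALL_SPEED = 0.2   # below this airspeed, the plane starts to stall
    SAFE_MARGIN = 50.0  # stay at least this far from obstacles
    MIN_ENERGY = 0.3    # don't waste shots when energy is low

    def baseline_controls(obs):
        controls = {"turn": 0.0, "throttle": False, "fire": False}
        if obs["wall_distance"] < SAFE_MARGIN:    # don't crash into obstacles
            controls["turn"] = -obs["wall_side"]  # steer away from the wall
        if obs["airspeed"] < STALL_SPEED:         # don't stall out
            controls["throttle"] = True
        likely_hit = obs["enemy_in_sights"] and obs["enemy_distance"] < 300
        if likely_hit and obs["energy"] > MIN_ENERGY:
            controls["fire"] = True               # shoot only when it'll land
        return controls

A learned policy has to discover all of this (and much more) on its own, which is part of what makes the game an interesting RL environment.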
[1] http://altistats.com/viewer/?f=4b020f28-af0b-4aa0-96be-a73f0... (Press h for help on controls. Planes will "jump around" when they're not close to the objective: the server only sends limited information about planes outside the client's field of vision, but my recording viewer displays the whole map.)