Robocode Tank Royale (robocode-dev.github.io)
204 points by gioazzi on June 5, 2022 | hide | past | favorite | 52 comments


I remember this from years back.

A friend of mine came up with a fantastic strategy based around the fact that shot strength was a float that could be modified.

He used this to have his team fire extremely weak shots at each other, with the value of the float encoding different messages. He then used these to co-ordinate his bots against the enemy as a team, instead of individuals.
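The trick described above can be sketched roughly like this. This is a toy illustration, not the real Robocode API: `MIN_POWER`, `STEP`, and the framing are invented, and a real bot would pass the resulting value to its fire call and read teammates' shot powers off its radar events.

```python
# Toy sketch: hide a small integer message in the low-order digits of
# bullet firepower. The weakest legal shot wastes almost no energy, so
# teammates can "talk" by firing tiny bullets at each other.

MIN_POWER = 0.1   # assumed minimum shot power
STEP = 0.001      # assumed resolution used to hide the message

def encode(message_id):
    """Pack a small integer message into a near-minimum firepower value."""
    return MIN_POWER + message_id * STEP

def decode(fire_power):
    """Recover the message id from an observed teammate's shot power."""
    return round((fire_power - MIN_POWER) / STEP)
```

Rounding in `decode` absorbs floating-point noise, so the channel survives small representation errors in the power value.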


That's a bit like the protocol some smart lightbulbs use to connect to Wi-Fi networks, which is simultaneously genius and depressingly dystopian.

TIL that Wi-Fi encryption is vaguely akin to a VPN, in that it encrypts at the TCP/UDP/etc level, not the IP level, and TIL also that Wi-Fi encryption (which is a stream cipher) does not alter packet lengths.

Soooo, when you "pair" such a lightbulb using this protocol, the companion app is actually just spraying empty packets with specific packet lengths to 255.255.255.255, and the lightbulb's ESP8266 (or similar) is sitting in monitor mode and circularly-logging the lengths of the packets it sees into a buffer in memory until it sees a valid setup message.

If you presume the setup message is protected by some sort of key you'd be sorely mistaken :D your Wi-Fi SSID and PSK are available in plaintext to anything and everything that happens to be in monitor mode at the time. Like I said, genius yet very very hard to not see as deliberately chaotic-evil.
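A simulated sketch of the trick described above: the sender can only choose packet *lengths* (the payloads are encrypted), and a monitor-mode receiver can only observe those lengths. The offset and framing here are invented for illustration; real SmartConfig-style protocols add checksums and sequencing on top.

```python
# Covert channel via packet lengths: encode each credential byte as the
# length of one broadcast packet; any radio in range can decode it.

OFFSET = 40  # hypothetical base length so data doesn't collide with real traffic

def lengths_for(secret):
    """Encode each byte of the SSID/PSK as one broadcast-packet length."""
    return [OFFSET + b for b in secret]

def sniff(observed_lengths):
    """A monitor-mode listener recovers the bytes from lengths alone."""
    return bytes(l - OFFSET for l in observed_lengths if l >= OFFSET)

# Anything in monitor mode recovers the credentials in plaintext:
recovered = sniff(lengths_for(b"MySSID:hunter2"))
```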


That is false: with Wi-Fi encryption, layers 3 and up are encrypted, and the same goes for VPNs.


What stops this kind of coordination from being used to smuggle an arbitrary massively parallel workload onto the system? :o)


What stops you from using discord messages as a database? :p I really enjoy this type of stuff though, is there a term for it?

A while back, after Skype got rid of their SDK, I was annoyed that the app didn't unfurl GIFs. So I reverse engineered their API for sending/updating messages and realized you could update a message that contained a file. The only problem was that the file had to be on their servers, or a client wouldn't display it. So I reverse engineered the iOS app as well to find a way to get a file onto their servers (I don't recall why I couldn't do it from the web API). After that I had two things: a way to get files onto their servers, and an API to update messages. So I took each frame of a GIF, uploaded them to their servers, and then used the API to update the message at the frame rate of the GIF.

Of course there will be people making Discord bots out there laughing at how simple this sounds, but I thought the combination of reverse engineering and applying it to add a new feature to the app was pretty novel.


> What stops you from using discord messages as a database? :p I really enjoy this type of stuff though, is there a term for it?

"Our tech stack is Clojure hosted on the JVM of massively distributed workers running opportunistically on Robocode Tank Royale with homebrew float-encoded protobuf message passing, and using NoSQL Discord messages as a backend."


You say it's an ugly hack, but our +0.8% profit margins speak for themselves!


> What stops you from using discord messages as a database? :p I really enjoy this type of stuff though, is there a term for it?

Potential SIGBOVIK papers? But more seriously, for communications, it would be "covert channels". I met someone at NCSU who was working on something with bots communicating through WoW trade offers (which would be unlikely to be logged, unlike trades or chat messages).


Brilliant. It reminds me of the Game of Life, which has such simple rules but amazing emergent behaviour. No surprise that something as sophisticated as tank wars has strategies never anticipated by the original creators.


Someone at work once organized a Robocode tournament, and we spent a paid day playing with it.

Most people went for an "if-based" strategy: if close to a wall, turn; if pointing towards an enemy, shoot; etc. And when that works, apply some more rules and geometric calculations.

I tried using neural nets for mine: one net that would learn the movement patterns of opponents (offline), predict their positions, and shoot there (since bullets take time to travel). That actually worked fairly well with a really simple net.

For movement, however, deep RL wasn't that big yet, so I struggled a bit: what to train on? So I used genetic algorithms to train that part of the net. Not the best results, but it worked. Mind you, these were simple homemade nets; I'm not sure deep learning was even a concept then.

It's hard to write a good fitness function. I tried deducting fitness when the bot hit a wall; then it would just stand still. Then I tried increasing fitness based on movement; then it would just oscillate in one spot. Etc. etc. GAs exploit all loopholes, which is pretty interesting.

Great fun. Great tool to explore programming at all levels.


If you were going to use genetic algorithms, you should have gone all in: seed with randomized neural networks, have them compete, and breed the winners.

Literally survival of the fittest!
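The compete-and-breed loop suggested here can be sketched in a few lines. Everything below is a toy: `TARGET` and the `battle` function are invented stand-ins (a real run would decide each fight by simulating a Robocode match), and genomes are plain parameter vectors rather than network weights.

```python
import random

random.seed(0)  # deterministic for the example

# Stand-in for "good weights"; a real fitness would come from battles.
TARGET = [0.3, -0.7, 0.5]

def dist(genome):
    """Squared distance to the hypothetical ideal parameters."""
    return sum((x - t) ** 2 for x, t in zip(genome, TARGET))

def battle(a, b):
    """Toy match: the genome closer to TARGET wins."""
    return a if dist(a) < dist(b) else b

def breed(a, b, rate=0.1):
    """Uniform crossover plus Gaussian mutation."""
    child = [random.choice(pair) for pair in zip(a, b)]
    return [x + random.gauss(0, rate) for x in child]

def evolve(pop_size=20, generations=30):
    pop = [[random.uniform(-1, 1) for _ in range(3)] for _ in range(pop_size)]
    for _ in range(generations):
        random.shuffle(pop)
        # have them compete pairwise; only the winners survive
        winners = [battle(a, b) for a, b in zip(pop[::2], pop[1::2])]
        # refill the population by breeding random pairs of winners
        pop = winners + [breed(*random.sample(winners, 2))
                         for _ in range(pop_size - len(winners))]
    return min(pop, key=dist)
```

Because each generation's winners survive intact, the best genome is never lost, so the population can only improve or hold steady.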


Hehe, yeah, I did something similar later, using game score as the fitness score (so you still had a chance of breeding even if you didn't win) in a kind of tournament selection. The problem is that each generation then takes ages to simulate, compared to just observing behavior for a short time.

But that worked much better as a fitness score, yes. It's very true that what you measure is what will be optimized for. Both for employees and robots.


I had a professor who said that the first thing a GA discovers is that you didn't describe the problem carefully enough.


At my alma mater, we, the students, used to organize a one-week "Introduction to Programming" course based on Robocode. It was meant to teach freshmen the very basics of Java before they'd have their first classes.

They'd have four days to come up with strategies and develop their robots and on Friday there was a tournament between all the developed robots. The best teams got a non-trivial prize (and bragging rights).

It was really interesting to see which strategies the students came up with and how they changed from year to year. The amount of material provided really made a much larger difference than one might expect.

Among the interesting challenges of Robocode is that the bullets you shoot are so slow that your opponent has ample time to dodge, so you need to predict their movement. On the other hand you can't see where your opponent is shooting so you need to guess and move elsewhere. It's really quite fun.


At Baylor ~20 years ago we had to compete with other students in a required freshman/sophomore comp sci class.


That's amazing! Competitions in courses have always motivated me to go above and beyond, even though they were never graded.

I'm assuming you had a longer time frame than one week? I realize this is a long shot, but do you happen to remember what strategies performed best when you took that course?


I remember playing https://en.wikipedia.org/wiki/ChipWits on an original 128K Mac at school very, very fondly. One of the best programming games I ever had the pleasure of playing.


They’re trying to reboot it

https://playchipwits.com/


That looks really cool. I wonder how I missed it.


This reminds me of Omega [1] from 1989. One of the more fun games I've played. I was thinking about giving it a go again and seeing how well I do.

[1] - https://en.wikipedia.org/wiki/Omega_(video_game) - https://corewar.co.uk/omega/files/misc/omegamanual.pdf


I was always impressed with Robocode coders. But I always wished the team meta had somehow developed further.

One tank is hard enough, but team games where radar-free drones have more HP seem to offer interesting strategies.

The radar system is very smart in this game, covering at most 22.5 degrees per tick. Tracking one enemy is easy enough, but getting a good view of the battlefield as a whole seems like a different game.
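A minimal sketch of a "lock" radar under a per-tick sweep limit like the one mentioned above. Function names and the state representation are made up, and the 22.5-degree figure is taken from the comment, not verified against the spec.

```python
# Turn the radar toward the enemy's last-seen bearing, clamped to the
# per-tick sweep limit. Angles are in degrees.

MAX_RADAR_TURN = 22.5  # per-tick limit quoted above

def normalize(angle):
    """Map an angle to (-180, 180] so we always turn the short way."""
    angle %= 360.0
    return angle - 360.0 if angle > 180.0 else angle

def radar_turn(radar_heading, enemy_bearing):
    """How far to turn this tick to point the radar at the enemy."""
    offset = normalize(enemy_bearing - radar_heading)
    return max(-MAX_RADAR_TURN, min(MAX_RADAR_TURN, offset))
```

With several enemies, the same clamp applies while you sweep between last-known bearings, which is exactly why whole-battlefield coverage is the harder game.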

I guess single-bot games and 1v1 are easier to test and make AIs for, and were more fun for larger parts of the community.


Maybe check out Gladiabots[1]? I have no affiliation with them, but I played around with it for a few weeks a couple of years ago. It's not really traditional programming, but I think it's quite fun coming up with strategies.

[1] https://store.steampowered.com/app/871930/GLADIABOTS__AI_Com...


Depending too heavily on the radar-free drones is risky, because they are blind if the radar drone is destroyed. Also, the enemy can tell from your starting HP which bots have radar and which are drones, so this tactic would be easy to exploit.


Risky, yes, but "wave surfing" bots can keep track of when the enemy takes a shot and how much energy was used in the shot (0.1-energy bullets travel faster than 3-energy bullets), and calculate where drones need to position themselves to protect the radar drones. (Actually, good bots use wave tracking on offense: they collect statistics on how the *enemy moves* in relation to your own shots, and use those statistics to try to predict the enemy's movements.)

"Wave surfing" is necessary to beat AI guns (statistical learning): it keeps your bots "flattening the gun curve", preventing any system (be it statistical or a neural net) from learning your patterns. Extending the wave-surfing principle to a team setting means using drones to purposefully protect your radar bots.

---------

Advanced 1v1 dueling bots can already use this principle to shoot 0.1 bullets to "protect themselves" (the 0.1 bullet travels so as to "block" any bullets that are on their way to the planned destination). Stationing a "shield drone" to perform the job would be much, much simpler.

---------

https://robowiki.net/wiki/Wave_Surfing

https://robowiki.net/wiki/Waves

All you gotta do is:

1. Detect the wave as the enemy shoots (bots can't see bullets, but they can see a loss of energy between 0.1 and 3, indicating a fired shot)

2. Plan out where your bot is going to move when that wave intersects your bot.

3. Shoot a "protection bullet" along that wave, blocking any enemy bullets that would intersect with your planned location on the wave.

Alternatively, #3 could be "station your high-HP radar-free drone" to be in that area instead.

Case #1: Your enemy predicts where you are. In this case, the drone (or protection bullet) will cancel out the enemy bullet.

Case #2: Your enemy fails to predict where you are. In this case, the drone / radar bot / 0.1 bullet will miss entirely. Since 0.1 is the minimum energy spent per shot, you will have more energy than the opponent by the end of the game, and you can kill them once they're unable to fire.
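Steps 1 and 2 above can be sketched compactly. The 20 - 3 × power bullet speed is the classic Robocode rule; the positions and bookkeeping here are simplified stand-ins for real bot state.

```python
import math

def detect_shot(prev_energy, energy):
    """Return bullet power if the energy drop looks like a shot, else None."""
    drop = prev_energy - energy
    return drop if 0.1 <= drop <= 3.0 else None

def ticks_until_hit(enemy_pos, my_pos, power, ticks_since_fired=0):
    """How many more ticks until the wave front reaches our position."""
    speed = 20.0 - 3.0 * power
    remaining = math.dist(enemy_pos, my_pos) - speed * ticks_since_fired
    return max(0.0, remaining / speed)
```

Knowing the arrival tick is what lets a surfing bot plan its dodge, and what would let a team fire a "protection bullet" (or park a drone) along the wave at the right moment.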

------

It seems that with a good enough radar-network (such that you have 100% vision of the entirety of the enemy forces), you could accurately track all waves. Of course, enemy forces could move in such a way to try to escape your radar vision. Etc. etc.

In any case, the "meta" for the team game is unknown and untested. There are questions like "how many radars are needed to fight effectively?" (probably more than zero, probably fewer than an all-radar team).

---------

Because 1v1 duel bots are so good at tracking waves, the multiplayer meta will almost certainly be about finding "nodal points" and coordinating fire, so that enemies cannot avoid the massed fire from these drone bots.

Similarly, positioning your bots far enough apart to minimize overlapping vulnerable zones is another consideration.

Alas, most "team games" are just an off-the-shelf melee bot x5 with very basic friendly-fire awareness. I think part of the reason is that the default rulesets (the size of maps and whatnot) are insufficient to develop a fun meta. The Robocode community must first find a good set of rules (map size in particular) that creates a fun environment for experimenting with a variety of strategies.


You're referring to "bullet shadows"; this is pretty well known in the Robocode community. It can work well (all the top 1v1 bots do it), and even more so in a team context where you have more perpendicular angles to increase the shadow size, but in the heat of battle with multiple bots it is very difficult to correctly track every energy drop. The timing of detecting the energy drop is very sensitive, and if only one bot has radar you'll definitely be missing exact timestamps.

Even in the melee context, only a few bots (the top 3 or 4, I think) even attempt wave surfing, and the accuracy is far from 100%. Neuromancer, for example, keeps the earliest and latest timestamps for energy drops to try to account for this, but that wouldn't be anywhere near enough for bullet shadows to get a high amount of coverage.

In my experience, the 20% HP bonus isn't worth the impact on your strategy and the reduced timestamp accuracy of radar events. Letting every team member take on any role depending on the field layout is much more valuable.


I loved Robocode; it was one of my first introductions to programming, as we used it in my high school programming class. I've had a pipe dream for a while to build a browser-based, Robocode-inspired game. Not an exact clone, but a similar idea of programming bots to fight each other.



Thank you! I've seen some of those, but some are new to me. Lots of cool stuff.


Maybe we can use it as a self-improvement exercise:

Carve out a day every year to write a brand new bot (no reuse) to see how it fares against older bots. Try to outsmart ourselves.


Oh man, I spent hours and hours on a similar game in the 90s: RoboWar. I learned so much playing that game.

https://en.wikipedia.org/wiki/RoboWar


See also BattleCode https://www.youtube.com/watch?v=x3a5dXaj-XA

A programming competition at MIT; it looks like they change the engine every couple of years. When I first encountered it, it was a StarCraft-style game. Now it looks like agents competing in a resource-constrained environment.

https://www.youtube.com/watch?v=X5d00wtBX3k


The resource constrained aspect is something I find fascinating, because it levels the playing field and incentivizes efficiency. I'm trying to get a (very) resource limited chess competition going: https://rlc-chess.com/


I love that you are using WebAssembly!

For the rest of the thread: Wasm allows the rlc-chess engine to limit the number of instructions executed or the amount of memory allocated while supporting lots of different languages, so no language gets preferential treatment. The strong sandboxing of Wasm lets anyone run anyone else's code without worrying about an adversarial attack, whether accidental or deliberate.
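The instruction-limiting idea (Wasm runtimes call it "fuel") can be illustrated with a toy stack machine. This is not wasmtime's actual API, just a sketch of the mechanism: every instruction costs one unit of a budget, and execution aborts deterministically when the budget runs out.

```python
# Toy "fuel" metering: a tiny stack machine that charges one unit per
# instruction and aborts when the budget is exhausted.

class OutOfFuel(Exception):
    pass

def run(program, fuel):
    """Execute (op, arg) instructions on a stack; return (result, fuel left)."""
    stack = []
    for op, arg in program:
        if fuel == 0:
            raise OutOfFuel("instruction budget exhausted")
        fuel -= 1
        if op == "push":
            stack.append(arg)
        elif op == "add":
            stack.append(stack.pop() + stack.pop())
    return stack[-1], fuel
```

Because the abort point depends only on the instruction count, a bot that loops forever loses in a reproducible way on every machine, which is what makes the competition fair.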


Alternative? - Rocket Bot Royale - https://rocketbotroyale.winterpixel.io


Thanks. It’s a good game, but I don’t see a programming aspect to it.


I have a little self-written version I use as an assignment for Scala students at UNE (Aus).

At the moment, using classic actors but it'll probably shift to typed actors next time around.

https://github.com/UNEcosc250/2018assignment3

I also put in a bit about "tankfighting with insults" a la Monkey Island, to try to give a little exercise in streams.

(Relatively safe to link because I'll be updating it next year anyway)

Some years ago, a colleague and I ran a software studio course at UQ where we used the original Robocode codebase as the starter project, and had teams adding action-replay, Call of Duty style killstreak rewards and all sorts of other odd features.

(Though the pain of the original Robocode Java codebase was that there was a 1,000-line class so central to everything that by the time students were done with it, it was a 3,000-line class. Our hopes of a "prime target for students to refactor" were thwarted by "turns out, students don't do that".)


It's not clear from the website's home page how this differs from the original version hosted on SourceForge. This page explains it: https://robocode-dev.github.io/tank-royale/articles/tank-roy...


I won an unofficial competition at university back in the 1990s; we competed using something called PascalRobots. The game was played via rounds of 1v1 matches. My victory came thanks to a simple trick: I noticed that certain algorithms had good success against most algorithms but were very vulnerable to a few others, which, in turn, were very vulnerable to most other ones. Basically a rock-paper-scissors problem. So I did a very simple thing: I programmed a robot that spent the first 50% of its health using one algorithm, then switched to a very different one. The second one simply ran along the edges semi-randomly, scanning and shooting backwards. I remember the rules were that every action, like scanning, took some time, so you couldn't build a robot that would pinpoint the exact location of the enemy in the allotted time; you'd waste too many cycles on that.
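The two-phase trick described here reduces to a threshold check. The strategy functions and state shape below are hypothetical placeholders, not PascalRobots code:

```python
# Switch play style at the 50% health mark, as described above.

def aggressive(state):
    """Phase one: the algorithm that beats most opponents."""
    return "close_and_shoot"

def evasive(state):
    """Phase two: run the edges semi-randomly, shooting backwards."""
    return "hug_walls_shoot_backwards"

def choose_action(state, max_health=100):
    """Rock-paper-scissors hedge: change algorithm once half health is gone."""
    strategy = aggressive if state["health"] > max_health * 0.5 else evasive
    return strategy(state)
```

The point of the switch is that an opponent tuned to counter phase one meets a completely different bot in phase two.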


See also with a similar concept: RoboForge[0][1] (2001), where you designed and programmed a fighting robot.

Roboforge also did another thing that I still haven't seen again, even two decades later in this microtransaction-laden world: paid tournaments!

You could enter your bot into online tournaments for free, or you could pay something like $5 to enter a paid tournament, and if you won you'd win a real-money prize. There's probably some huge legal issue around what pretty much amounts to gambling, but it was brilliant.

[0] http://www.roboforge.altervista.org/

[1] https://en.wikipedia.org/wiki/Roboforge


I have fond memories of Color Robot Battle from my childhood: setting up two robots and heading off to school while they spent hours fighting.

https://corewar.co.uk/colorrobotbattle.htm


Wow, that's cool. I didn't know that existed. It's similar to RobotWar, though it doesn't seem to have a cool IDE and source-level debugger like RobotWar does. Looks very fun though!


At university I got into AI, neural nets, and evolutionary algorithms. I wanted to develop my skills with these, so for a laboratory project I chose to build a Robocode bot. The idea was to learn how to evolve it rather than hardcode it. This video was quite an inspiration: https://youtu.be/Hp6bhARBGc4

I used a Java GA library called Watchmaker and trained my bot against built-in bots AND against a bot that had been world no. 1 in previous years.

At that time I could barely code cleanly and was mostly fighting the frameworks, so at one point evolution in three dimensions (acceleration, bodyTurn, turretTurn) was implemented, but maybe the most important one, shooting, was not.

I ran out of time; the next day I had to give a presentation, so I just gave it a go. Surprisingly, the bot evolved in minutes to the point where it beat the ex-world-champion bot. First its performance went above 50%, which was a shock, but later it got to 60%+, sometimes even 80%. But how? How do you win without a single shot?

Turns out evolution and randomness can find things so counterintuitive you could basically never come up with them yourself. (I had a sense of that; that's why I chose the domain. ;D)

What I saw was that my bot had evolved a really simple but weird technique to dodge the champ's bullets: it found some kind of frequency, and at that frequency it just... accelerated up and down (into negative velocity) and HIT THE WALL AGAIN AND AGAIN. Wat?

The champ bot’s prediction was just too good, but it wasn't prepared for such a stupid enemy as my bot: it was basically impossible to predict that my bot would slow down from V to 0 in zero seconds, so the champ was almost constantly shooting ahead of or behind my bot, hitting the wall instead of my bot. If the wall hadn't been there, it would have hit almost 100% of the time.

Shooting requires energy, and after a given time both bots lose energy at the same rate. My bot could win just by "backing out" and not being hit too often, wasting the enemy's energy.

Since that experience I couldn't care less about NNs (except neuroevolution); evolutionary algorithms ftw! :D

(I hope no one will create the AI killing us all thanks to my comment. :D )


NNs and evolutionary algorithms are not antithetical; there are many ways to update a neural network. Maybe you mean "gradient descent" vs. "genetic"?


Reminds me of sc2ai https://sc2ai.net/ where you can write bots for starcraft2 and let them battle it out vs other bots.



Very cool. Note that RobotWar was first developed on the PLATO system circa the 1970s; I don't know the exact year. I loved playing it as a kid on the Apple II.


I'd known that at one point and forgotten it. Thanks.


Brings back a lot of good highschool memories :)


Wow, what a throwback! This game is the reason I got into programming. Thank you for reminding me that this exists!


You can program a tank, or team of tanks, to fight each other.

Each tank can move, scan other tanks and shoot.

Your tank has hit points, and shooting a bullet decreases your hit points by the amount of firepower you used.

Bullets are particles that move over time, so you have to shoot not at where other tanks are, but at where they are going to be, based on their last position seen on radar.
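Leading the target can be sketched as a small fixed-point iteration: guess an intercept time, see where the target will be then, and update the time from the bullet's travel distance. The 20 - 3 × power bullet speed is the classic Robocode rule; the coordinate handling here is a simplified illustration, not the real API.

```python
import math

def lead_angle(me, target, target_vel, power):
    """Firing angle (radians) to hit a constant-velocity target."""
    speed = 20.0 - 3.0 * power  # classic Robocode bullet speed
    t = 0.0
    for _ in range(50):  # fixed-point iteration; converges quickly
        future = (target[0] + target_vel[0] * t,
                  target[1] + target_vel[1] * t)
        t = math.dist(me, future) / speed
    future = (target[0] + target_vel[0] * t,
              target[1] + target_vel[1] * t)
    return math.atan2(future[1] - me[1], future[0] - me[0])
```

A stationary target reduces to aiming straight at it; a moving one gets led by exactly the bullet's travel time.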


"Please notice that Robocode contains no gore, no blood, no people, and no politics. The battles are simply for the excitement of the competition that we love so much." If only everything could be so pure.


Do games like this help improve game development skills?



