Setting aside that Space Invaders from scratch is not representative of real engineering, it will be interesting to see what the business model for Anthropic will be if I can run a solid code generation model on my local machine (no usage tier per hour or week), say, one year from now. At $200 per month over two years I could buy a decent Mx with 64GB (or perhaps even 128GB, taking residual value into account).
How come it's "not representative of real engineering"? Other than copy-pasting existing code (which is not what an LLM does), I don't see how you can create a Space Invaders game without applying "engineering".
> Write an HTML and JavaScript page implementing space invaders
It may not be "copy-pasting", but it's generating output recreated as best it can from its training, which included looking at Space Invaders source code.
The engineers at Taito who originally developed Space Invaders were not told "make Space Invaders" and then did their best to recall all the source code they'd looked at in their lives to re-type the source code of an existing game. From a logistics standpoint, where the source code already exists and is accessible, you may as well have copy-pasted it and fudged a few things around.
The source code for the original Space Invaders from 1978 has never been published. The closest thing to that is disassembled ROMs.
I used that prompt because it's the shortest possible prompt that tells the model to build a game with a specific set of features. If I wanted to build a custom game I would have had to write a prompt that was many paragraphs longer than that.
The aim of this piece isn't "OMG look, LLMs can build space invaders" - at this point that shouldn't be a surprise to anyone. What's interesting is that my laptop can run a model that is capable of that now.
The discussion I replied to was just regarding whether or not what the LLM did should be considered "engineering"
It doesn't really matter whether or not the original code was published. In fact that original source code on its own probably wouldn't be that useful, since I imagine it wouldn't have tipped the weights enough to be "recallable" from the model, not to mention it was tasked with implementing it in web technologies.
Making a Space Invaders game is not representative of normal engineering because you're reproducing an existing game with well-known specs and requirements. There are probably hundreds of thousands of words describing and discussing Space Invaders in GLM-4.5's training data.
It's like using an LLM to implement a red black tree. Red black trees are in the training data, so you don't need to explain or describe what you mean beyond naming it.
"Real engineering" with LLMs usually requires a bunch of up front work creating specs and outlines and unit tests. "Context engineering"
>> Other than copy-pasting existing code (which is not what an LLM does)
I'd like to see someone try to prove this. How many Space Invaders projects exist on the internet? It'd be hard to compare model-"generated" code to everything out there looking for plagiarism, but I bet there are lots of snippets pulled in. These things are NOT smart, they are huge and articulate information repositories.
Based on my mental model of how these things work I'll be genuinely surprised if you can find even a few lines of code duplicated from one of those projects into the code that GLM-4.5 wrote for me.
That's not an example of copying from an existing Space Invaders implementation. That's an LLM using a CSS animation pattern - one that it's seen thousands (probably millions) of times in the training data.
That's what I expect these things to do: they break down Space Invaders into the components they need to build, then mix and match thousands of different coding patterns (like "animation: glow 2s ease-in-out infinite;") to implement different aspects of that game.
That code certainly looks similar, but I have trouble imagining how else you would implement very basic collision detection between a projectile and a player object in a game of this nature.
A human would likely have refactored the two collision checks between bullet/enemy and enemyBullet/player in the JavaScript code into their own function, perhaps something like "areRectanglesOverlapping" (see the sketch after the list below). The C++ code only does one collision check like that, so it has not been refactored there, but as a human, I certainly would not want to write that twice.
More importantly, it is not just the collision check that is similar. Almost the entire sequence of operations is identical on a higher level:
1. enemyBullet/player collision check
2. same comment "// Player hit!" (this is how I found the code)
3. remove enemy bullet from array
4. decrement lives
5. update lives UI
6. (createParticle only exists in JS code)
7. if lives are <= 0, gameOver
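Here's a minimal sketch of that refactor, for illustration only; the helper name (areRectanglesOverlapping) and the rect fields (x, y, width, height) are my own, not taken from either codebase:

    // Hypothetical helper a human would likely extract: one AABB overlap
    // check reused for both bullet/enemy and enemyBullet/player.
    function areRectanglesOverlapping(a, b) {
      return a.x < b.x + b.width &&
             a.x + a.width > b.x &&
             a.y < b.y + b.height &&
             a.y + a.height > b.y;
    }

    // The generated code instead writes this check out inline twice, e.g.:
    // if (areRectanglesOverlapping(bullet, enemy)) { /* enemy hit */ }
    // if (areRectanglesOverlapping(enemyBullet, player)) { /* player hit */ }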
> find even a few lines of code duplicated from one of those projects
I'm pretty sure they meant multiple lines copied verbatim from a single project implementing space invaders, rather than individual lines copied (or likely just accidentally identical) across different unrelated projects.
Sorites paradox. Where's the line between a "snippet" and a "design pattern"?
Compressing a few petabytes into a few gigabytes means the models can't simply be copy-pasting all of the things they're accused of copying, from code to newspaper articles to novels. There's not enough space.
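To put rough numbers on that (illustrative figures only, taking "a few petabytes" and "a few gigabytes" at face value):

    // Back-of-envelope ratio between training data and model weights:
    const trainingBytes = 3e15;  // ~3 PB of training text
    const weightBytes   = 3e9;   // ~3 GB of weights
    console.log(trainingBytes / weightBytes);  // ~1,000,000x reduction
    // Lossless text compression manages roughly 3-5x, so the weights cannot
    // hold the training data verbatim; most of it has to be generalized
    // into patterns rather than stored.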
" it will be interesting to see what the business model for Anthropic will be if I can run a solid code generation model on my local machine "
Most people won't bother buying powerful hardware for this; they will keep using SaaS solutions, so Anthropic could be in trouble if cheaper SaaS solutions come out.
I've been mentally mapping the models to the history of databases.
Most databases in the early days you had to pay for. There are still paid databases that are just better than the ones you don't pay for. Some teams think the cost is worth the improvements, and there is a (tough) business there. Fortunes were made in the early days.
But eventually open source databases became good enough for many use cases, and they have their own advantages. So lots of teams use them.
I think coding models might have a similar trajectory.
You make a good point -- a majority of applications are now using open source or free versions[1] of DBs.
My only feedback is: are these the same animal? Can we compare an open-source DB vs. a paid/closed DB to me running an LLM locally? The biggest issue right now with LLMs is simply the cost of the hardware to run one locally, not the quality of the actual software (the model).
[1] e.g. SQL Server Express is good enough for a lot of tasks, and I guess would be roughly equivalent to the upcoming open versions of GPT vs. the frontier version.
A majority of apps nowadays are using proprietary forks of open source DBs running in the cloud, where their feature set is (slightly) rounded out and smoothed off by the cloud vendors.
Not that many projects are doing fully self-hosted RDBMS at this point. So ultimately proprietary databases still win out; they just (ab)use the PostgreSQL trademark to make people think they're using open source.
LLMs might go the same way. The big clouds offering proprietary fine tunes of models given away by AI labs using investor money?
That's definitely true. I could see more of a "running open source models on other people's hardware" model emerging.
I dislike running local LLMs right now because I find the software still kinda janky: you often have to tweak settings and find the right model files, and basically hold a bunch of domain knowledge I don't have space for in my head. That's on top of maintaining a high-spec piece of hardware and paying the power costs.
Closed doesn't always win over open. People said the same thing about Windows vs Linux, but even Microsoft was forced to admit defeat and support Linux.
All it takes is some large companies commoditizing their complements. For Linux it was Google, etc. For AI it's Meta and China.
The only thing keeping Anthropic in business is geopolitics. If China were allowed full access to GPUs, they would probably die.
> The only thing keeping Anthropic in business is geopolitics. If China were allowed full access to GPUs, they would probably die.
Disagree. Anthropic have a unique approach to how they post-train their models and tune them to be the way they want. No other lab has managed to reproduce the style and personality of Claude yet, which is currently a key reason why coders prefer it. And since post-training data is secret, it'll take other providers a lot of focused effort to get close to that.
Latency and tooling support? The UX of a cloud-based LLM vs. local is much better for the cloud option - not so much for dev tooling.
I tried using remote workstations - I am not a fan of lugging a beefy client machine around to do my work - I would much rather use something that's super light and power efficient.