Typed languages are better suited for vibecoding (solmaz.io)
273 points by hosolmaz 2 days ago | 226 comments




> I am managing projects in languages I am not fluent in—TypeScript, Rust and Go—and seem to be doing pretty well.

This framing reminds me of the classic problem in media literacy: people can tell when a journalistic source is poor when they're subject matter experts, but tend to assume that the same source is at least passably good on subjects they're less familiar with.

I’ve had the same experience as the author when doing web development with LLMs: it seems to be doing a pretty good job, at least compared to the mess I would make. But I’m not actually qualified to make that determination, and I think a nontrivial amount of AI value is derived from engineers thinking that they are qualified as such.


Yup — this doesn't match my experience using Rust with Claude. I've spent 2.5 years writing Rust professionally, and I'm pretty good at it. Claude will hallucinate things about Rust code because it’s a statistical model, not a static analysis tool. When it’s able to create code that compiles, the code is invariably inefficient and ugly.

But if you want it to generate chunks of usable and eloquent Python from scratch, it’s pretty decent.

And, FWIW, I’m not fluent in Python.


> Claude will hallucinate things about Rust code because it’s a statistical model, not a static analysis tool.

I think that's the point of the article.

In a dynamic language or a compiled language, it's going to be hallucinating either way. If you're vibe coding in a compiled language, the errors are caught earlier, so you can vibe code them away before they blow up at run time.


Static analysis tools like rustc and clippy are powerful, but there are large classes of errors that escape those analyses — e.g. things like off-by-one errors.
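For instance, a toy Python sketch (function name made up) of the kind of bug that sails straight past any type checker:

    # Fully annotated, passes a strict type check, still wrong:
    def sum_first_n(xs: list[int], n: int) -> int:
        total = 0
        for i in range(1, n):   # off by one: should be range(n), so xs[0] is skipped
            total += xs[i]
        return total

    print(sum_first_n([10, 20, 30], 2))  # expect 30, get 20 -- no type error anywhere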

> If you're vibe coding in a compiled language, the errors are caught earlier so you can vibe code them away before they blow up at run time

You can say that again.

I was scanning through the many comments looking for this particular one, and you hit the nail on the head.

The irony is that it took the entire GenAI -> LLM -> vibe coding cycle to settle the argument that typed languages are better for human coding and software engineering.


Sure, but in my experience the advantage is less than one would imagine. LLMs are really good at pattern matching, and as long as they have the API and the relevant source code in their context they won't make many/any of the errors that humans are prone to.

Hah... yeah, no, its Python isn't great. It's definitely workable and better than what I see from 9/10 junior engineers, but it tends to be pretty verbose and over-engineered.

My repos all have pre-commit hooks which run the linters/formatters/type-checkers. Both Claude and Gemini will sometimes write code that won't get past mypy, and they'll then struggle to get it typed correctly before eventually bypassing the pre-commit check with `git commit -n`.

I've had to add some fairly specific instructions to CLAUDE.md/GEMINI.md to get them to cut this out.

Claude is better about following the rules. Gemini just flat out ignores instructions. I've also found Gemini is more likely to get stuck in a loop and give up.

That said, I'm saying this after about 100 hours of experience with these LLMs. I'm sure they'll get better with their output and I'll get better with my input.


To be fair, depending on what libraries you’re using, Python typing isn’t exactly easy even for a human; I spend more time battling with type checkers and stubs than I would like.

I can confirm input matters a lot. I'm a couple of hundred hours ahead of you and my prompting has come along a lot. I recommend test cycles, prompts to reflect on product-implementation fit (eg, is this what you've been asked to do?) and lots of interactivity. Despite what I've written elsewhere in these comments, the best work is a good oneshot followed by small iterations and attentive steering.

With access to good MCP tools, I've had really good experience using claude code to write rust: https://news.ycombinator.com/item?id=44702820

What MCP tools are you using?

LLMs are famously bad at producing Rust code. I'm not sure how much of it is the lesser amount of Rust code in the training data, or just the fact that Rust has a very large number of pitfalls, and a large standard library with many edge cases and things you'd imagine should exist but don't for a variety of reasons. Rust also has a much wider variety in the way things can be structured, compared to something like Go where there is often only one way of doing a particular thing.

Honestly, I don't think these are problems that Rust has. What I see LLMs struggle with in Rust is more to do with understanding the language semantics at a fundamental level - exactly the things that the compiler statically verifies. For example, they will see things they think are "use-after-free" or "use-after-move", neither of which is a thing in (safe) Rust, because they don't understand that the language does not have these problems.

Largely I think LLMs struggle with Rust because it is one of very few languages that actually does something new. The semantics are just way more different than the difference between, say, Go and TypeScript. I imagine they would struggle just as much with Haskell, Ocaml, Prolog, and other interesting languages.


Obviously you can write a use-after-free in Rust. The fact that it won't compile doesn't really matter when you're feeding the text to a non-compiler program like an LLM. I trust you don't mean to get carried away and suggest that they're somehow grammatically impossible.

I have had very good results using Claude to write Rust. My prompting is often something like

'I have a database table Foo, here is the DDL: <sql>, create CRUD end points at /v0/foo; and use the same coding conventions used for Bar.'

I find it copies existing code style pretty well.


I feel like I have had just as much luck with LLMs writing Rust as I have had with Java, Kotlin, and Swift. Which is better than C++ and worse than Python. I think that mostly comes down to the relative abundance of training data for these types of codebases.

But that is all independent of how the LLMs are used, especially in an agentic coding environment. Strong/static typed languages with good compiler messages have a very fast feedback loop via parsing and typechecking, and agentic coding systems that are properly guided (with rulesets like Claude.md files) can iterate much quicker because of it.

I find that even with relatively obscure languages (like OCaml and Scala), the time and effort it takes to get good outcomes is dramatically reduced, albeit with a higher cost due to the fact that they don't usually get it right on the first try.


> When it’s able to create code that compiles, the code is invariably inefficient and ugly.

Why not have static analysis tools on the other side of those generations that constrain how the LLM can write the code?


> Why not have static analysis tools on the other side of those generations that constrain how the LLM can write the code?

We do have that: we call them programmers, and without them you don't get much useful output at all. Beyond that, static analysis tools aren't powerful enough to detect the kinds of problems and issues these language models create.


I'd be interested to know the answer to this as well. Considering the wealth of AI IDE integrations, it's very eyebrow-raising that there are zero instances of this. Seems like somewhat low hanging fruit to rule out tokens that are clearly syntactically or semantically invalid.

I’d like to constrain the output of the LLM by accessing the probabilities for the next token, pick the next token that has the highest probability and also is valid in the type system, and use that. Originally OpenAI did give you the probabilities for the next token, but apparently that made it easy to steal the weights, so they turned that feature off.
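Roughly the loop I have in mind, as a sketch in Python; `next_token_candidates` and `is_valid_prefix` are hypothetical stand-ins for the probability API and a partial syntax/type check, not real functions:

    # Greedy decoding, but only tokens that keep the partial program valid are accepted.
    def constrained_generate(prompt: str, max_tokens: int = 256) -> str:
        text = prompt
        for _ in range(max_tokens):
            # Hypothetical: candidate tokens sorted by descending probability.
            for token, _prob in next_token_candidates(text):
                # Hypothetical: partial parse / type check of the prefix.
                if is_valid_prefix(text + token):
                    text += token
                    break
            else:
                break  # no candidate keeps the program valid; stop
        return text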

It's been tried already and doesn't work. Very often a model needs to emit tokens that aren't valid yet but will become so later.

This can be done: I gave mine a justfile and early in the project very attentively steered it towards building out quality checks. CLAUDE.md also contains instructions to run those after each iteration.

What I'd like to see is the CLI's interaction with VSCode etc extending to understand things which the IDE has given us for free for years.


> When it’s able to create code that compiles, the code is invariably inefficient and ugly.

At the end of the day this is a trivial problem. When Claude Code finishes a commit, just spin up another Claude Code instance and say "run a git diff, find and fix inefficient and ugly code, and make sure it still compiles."


After decades of writing software, I feel like I have a pretty good sense for "this can't possibly be idiomatic" in a new language. If I sniff something is off, I start Googling for reference code, large projects in that language, etc.

You can also just ask the LLM: are you sure this is idiomatic?

Of course it may lie to you...


> You can also just ask the LLM: are you sure this is idiomatic?

I found the reverse flow to be better. Never argue. Start asking questions first. "What is the idiomatic way of doing x in y?" or "Describe idiomatic y when working on x" or similar.

Then gather some stuff out of the "pedantic" generations and add to your constraints, model.md, task.md or whatever your stuff uses.

You can also use this for a feedback loop. "Here's a task and some code, here are some idiomatic concepts in y, please provide feedback on adherence to these standards".


> If I sniff something is off, I start Googling for reference code, large projects in that language, etc.

This works so long as you know how to ask the question. But it's been my experience that an LLM directed on a task will do something, and I don't even know how to frame its behavior in language in a way that would make sense to search for.

(My experience here is with frontend in particular: I'm not much of a JS/TS/HTML/CSS person, and LLMs produce outputs that look really good to me. But I don't know how to even begin to verify that they are in fact good or idiomatic, since there's more often than not multiple layers of intermediating abstractions that I'm not already familiar with.)


I'm not much of a JS/TS/HTML/CSS person either. But if I think something looks off and it's something I care about, then I'll lose a day boning up on that thing.

To your point that you're not sure what to search for, I do the same thing I always do: I start searching for reference documentation, reading it, and augmenting that with whatever prominent code bases/projects I can find.


This motivates the question: if you're doing all this work to verify the LLM, is the LLM really saving you any time?

After just a few weeks in this brave new world my answer is: it depends, and I'm not really sure.

I think over time as both the LLMs get better and I get better at working with them, I'll start trusting them more.

One thing that would help with that would be for them to become a lot less random and less sensitive to their prompts.


> and I don't even know how to frame its behavior in language in a way that would make sense to search for.

Have you tried recursion? Something like: "Using idiomatic terminology from the foo language ecosystem, explain what function x is doing."

If all goes well it will hand you the correct terminology to frame your earlier question. Then you can do what the adjacent comment describes and ask it what the idiomatic way of doing p in q is.


I think you’re missing the point. The point is that I’m not qualified to evaluate the LLM’s output in this context. Having it self-report doesn’t change that fact, it’s just playing hide the pickle by moving the evaluation around.

Not at all - my point was that it can effectively tutor you sufficiently for you to figure out if the code it wrote earlier was passable or not. These things are unbelievably good at knowledge retrieval and synthesis. Gemini makes lots of boneheaded mistakes when it comes to the finer points of C++ but it has an uncanny ability to produce documentation and snippets in the immediate vicinity of what I'm after.

Sure, that approach could fail in the face of it having solidly internalized an absolutely backwards conception of an entire area. But that seems exceedingly unlikely to me.

It will also be incredibly time consuming if you're starting from zero on the topic in question. But then if you're trying to write related code you were already committed to that uphill battle, right?


I think the concept of "readability" is good: it's a program within Google where your code gets reviewed by an expert in that language (but not necessarily your application/domain); once you're up to a level of writing idiomatic code and fully understanding the language etc., you get readability yourself.

When reviewing LLM code, you should have this readability in the given language yourself - or the code should not be important.



Thank you! I couldn’t remember the term.

> I couldn’t remember the term.

That's lethologica! Or maybe in this specific case lethonomia. [0]

[0] https://en.m.wikipedia.org/wiki/Tip_of_the_tongue


That's why I only use it on stuff I can properly judge.

It's been my experience that strongly opinionated frameworks are better for vibe coding regardless of the type system.

For example if you are using rails vibe coding is great because there is an MCP, there are published prompts, and there is basically only one way to do things in rails. You know how files are to be named, where they go, what format they should take etc.

Try the same thing in go and you end up with a very different result despite the fact that go has stronger typing. Both Claude and Gemini have struggled with one shotting simple apps in go but succeed with rails.


In comparison, a completely unopinionated framework like FastAPI, which got a popularity boost in the early AI surge, is a mess to work with if you are vibe coding. Most popular frameworks follow the principle of having no clear way to do things and leave it up to the developer. Opinionated frameworks went out of fashion after Rails, but it turns out they are significantly better suited for AI-assisted development.

You can opinionate Claude remarkably well with context files. I use a very barebones routing framework with my own architecture and Claude knows how all the parts should fit together. I also publish to context files the entire database structure along with foreign key pairings; that made a tremendous difference.
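For illustration, a rough sketch of that schema-dump step (stdlib sqlite3 here; filenames and paths are made up, adapt the queries for your database):

    # Dump tables, columns, and foreign-key pairings into a context file for the agent.
    import sqlite3

    conn = sqlite3.connect("app.db")
    tables = [r[0] for r in conn.execute(
        "SELECT name FROM sqlite_master WHERE type = 'table'")]

    with open("context/db_schema.md", "w") as out:
        for table in tables:
            out.write(f"## {table}\n")
            for _cid, name, col_type, *_rest in conn.execute(f"PRAGMA table_info({table})"):
                out.write(f"- {name}: {col_type}\n")
            for fk in conn.execute(f"PRAGMA foreign_key_list({table})"):
                # fk columns: (id, seq, referenced_table, from_col, to_col, ...)
                out.write(f"- FK {fk[3]} -> {fk[2]}.{fk[4]}\n")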

That's an interesting assertion you make there about opinionated frameworks. Do you have a source for that? From my perspective, opinionated frameworks have only gotten more popular. Rails might not be the darling of every startup in existence anymore but I think that's largely down to other languages coming in and adopting the best parts of Rails and crafting their own flavor that plays to the strengths of their favorite programming language. Django, Laravel, Spring Boot, Blazor, Phoenix, etc etc.

While a lot of people here on this platform like to tinker and are often jumping to a new thing, most of my colleagues have no such ideas of grandeur and just want something that works. Rails and its acolytes work really well. I'm curious to know what popular frameworks you're referencing that don't fit into this Rails-like mold?


I'm not familiar with all the frameworks you listed, but I've worked extensively with Spring Boot and I can assure you that it's not an opinionated framework (as in, one correct way to do things). Blazor and Phoenix are niche frameworks that don't have wide adoption outside this site. Django has a shared history/competition with Rails, but it's also not widely popular.

> We take an opinionated view of the Spring platform and third-party libraries so you can get started with minimum fuss

Spring Boot is definitely opinionated (this is taken from their home page). Maybe not as much as RoR, but saying it isn't at all sounds very strange to me, having worked with it for a few years too...


> Django has a shared history/competition with Rails but it's also not widely popular.

Are you sure? Django is insanely popular. I am not sure on what basis you are saying Django isn't popular. I posit Django is more popular than Ruby on Rails.


Django is super popular.

My experience has been the opposite with Rails because of open-ended patterns with Hotwire. Sure, Rails itself is opinionated, but Hotwire provides multiple ways to do the same thing, which confuses LLMs. For example, recently I tried building a form that allows creating related objects inline using modals. Claude 4 Sonnet got quite confused by that request, no matter how much help I provided. It managed in the end, but the solution left a lot to be desired. It can build the same feature using React on its own with basic instructions.

Same thing with other libraries like HTMX. Using TypeScript with React, and opinionated tools like Tanstack Query helps LLMs be way more productive because it can fix errors quickly by looking at type annotations, and using common patterns to build out user interactions.


I find Claude works extremely well at generating Stimulus controller code. The Hotwire trouble is likely a lack of documentation and git repos with larger Hotwire codebase patterns in its training data.

This is pretty anecdotal, but it feels like most of the published rails source code you find online (and by extension, an LLM has found) is from large, stable, and well-documented code.

Claude Code with Rails is amazing. Shout out to Obie for Claude on Rails. Works phenomenally well.

Basically it's like this:

the more constraints you have, the more freedom you have to "vibe" code

and if someone actually built AI for writing tests, catching bugs and iterating 24/7 then you'd have something even cooler


> if someone actually built AI for writing tests, catching bugs and iterating 24/7

This is called a nightly CI/CD pipeline.

Run a build and run all tests and run all coverage at midnight, failed/regressed tests and reduced coverage automatically are assigned to new tickets for managers to review and assign.


"Nightly?"

Iteration speed can never be faster than the testing cycle.

Unless you're building something massive (like Windows or Oracle maybe) nobody is relying on "nightly" integration tests anymore.


Post-merge tests, once a day?

Who does that? We are not in the 90s anymore.

Run all the tests and coverage on every PR, block merge on it passing. If you think that's too slow then you need to fix your tests.


We go through maybe 10k CPU hours in our nightly pipeline. Doing that for every PR in a team of 70 people is unsustainable from a cost standpoint.

The existing tests aren't optimal, but it's not going to be possible to cut it by 1-2 orders of magnitude by "fixing the tests"

We obviously have smaller pre-merge tests as well.


> We obviously have smaller pre-merge tests as well.

This. I feel like trying to segregate tests into "unit" and "integration" tests (among other kinds) did a lot of damage in terms of prevalent testing setups.

Tests are either fast or slow. Fast ones should be run as often as possible, with really fast ones every few keystrokes (or on file save in the IDE/editor), normal fast ones on commit, and slow ones once a day (or however often you can afford, etc.). All these kinds of tests have value, so going without covering both fast and slow cases is risky. However, there's no need for the slow tests to interrupt day-to-day development.

I seem to remember seeing something like `<slowTest>` pragma in GToolkit test suites, so at least a few people seem to have had the same idea. The majority, however, remains fixated on unit/integration categorization and end up with (a select few) unit tests taking "1-2 orders of magnitude" too long, which actually diminishes the value of those tests since now they're run less often.
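In Python-land the same split is one pytest marker away (a sketch; the marker name is up to you and should be registered in pytest.ini/pyproject.toml so it doesn't warn):

    import time
    import pytest

    def test_parser_fast():
        # Runs on every commit.
        assert int("42") == 42

    @pytest.mark.slow
    def test_full_pipeline_slow():
        # Runs nightly; day to day it's skipped.
        time.sleep(5)  # stand-in for a long integration run
        assert True

The commit hook runs `pytest -m "not slow"`; the nightly job runs plain `pytest`.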


Pssht, so little? With AI you're supposed to have a huge data center and pay them thousands of dollars to process many, many tokens. That way you are doing it right, 24/7.

How else are we going to cover these costs? https://www.youtube.com/watch?v=cwGVa-6DxJM


Have you considered that instead, whatever an LLM has the most examples of is what it's best at? Perhaps there's more well-structured Rails code in the training data than Go?

In my experience Gemini can one-shot Go apps. Determining that requires sound evals instead of anecdotes.

I'd really like to know what type of apps you're actually one-shotting with an AI. Seriously, can you please give me some example code or something? Because it seems like anything past a trivial program is far beyond their capabilities, or the result doesn't actually do what you specified.

if AI could really one-shot important, interesting apps, shouldn’t we be seeing them everywhere? where’s the surge of new apps that are so trivial to make? who’s hiding all this incredible innovation that can be so easily generated?

If AI could really accelerate or even take over the majority of work on an established codebase, we should be seeing a revolution in FOSS libraries and ecosystems. The gap has been noted many times, but so far all anyone's been able to dig up are one-off, laboriously-tended-to pull requests. No libraries or other projects with any actual downstream users.

It's taken over my mature codebase just fine. I'm not in the business of spending tokens on open source projects though.

But plenty of maintainers are in the business of spending mass amounts of time, energy, and actual money on open source projects. Some make a business out of it. Some are sponsored by their employer to spend paid work hours on FOSS projects. If LLMs could help them, some significant number would.

But if there are any instances of this, I have not seen them, and seemingly neither has anyone I've posed the question to, or any passersby.


How would you know? I don't label my changes that were made by AI.

Somebody would. Somebody would be an AI evangelist, or would become one. The FOSS ecosystem is large enough to be sure of that. We're not seeing nothing, we're just not seeing at all what the marketers and AInfluencers are prophesying. We're not even seeing what you describe. Why is that? Why is it limited to random commenters and not seen at all in the wild?

I've made another comment in this thread about a nice tool I one-shotted. The reason I don't publish anything now is because in the UK at least, companies are not behaving well with relation to IP: many contracts specify that anything you work on that could be expected of you in the course of your duties belongs to the company, and tribunals have upheld this.

There's also a bit of a stigma about vibe coding: career wise, personally I worry that sharing some of this work will diminish how people view me as an engineer. Who'd take the risk if there might be a naysayer on some future interview panel who will see CLAUDE.md in a repo of yours and assume you're incompetent or feckless?

Plus, worries about code: being an author gives you a much higher level of control than being an author-reviewer. To err as a writer is human, to err as a reader has bigger consequences.


I did a flask application that read an AWS account's glue resources, displayed them based on category (tag of "databasename" and "driver" etc) and offered the ability to run those jobs in serial or parallel, with a combined job status page for each batch. It also used company colours because I told it to pick a colour palette from the company website. It worked first time and produced sane, safe code.

There was a second shot, which was to add caching of job names because we have a few hundred now.

(Context: I'm at a company that has only ever done data via hitting a few hand replicated on prem databases at the moment and wanted to give twitchy folks an overview tool that was easy to use and look at)


My experience with Gemini has been pretty dismal. The CLI works much better than the VS code extension and both of them have struggled with one shotting go. Single files or single functions no problem though.

Weird, I thought Go was one of the go-to examples in HN for languages that LLMs work well with, precisely because it's opinionated and has many standard libs. Not that I've tried, my attempts at vibe coding felt disappointing, but I think this contradicts the zeitgeist?

I work in both Ruby and Go. There is no comparison; AI is way better with Ruby (Rails).

Hmm I can imagine that while LLMs are good at producing working code in Go they might not be as good at structuring larger applications, compared to building on opinionated frameworks.

I imagine there could be some presets out there that guide the vibe-coding engines to produce a particular structure in other languages for better results.


Is that specific to using Rails or it is good with plain Ruby as well?

Rails. I haven't tried at all with plain Ruby, but I doubt it. I think formulaic = static typing for AI.

Well yeah, it's like how a 5 year old can talk about what they want in their sandwich but will probably struggle to describe the flavours and textures they enjoy in detail.

it sounds like you should have just been writing configuration this whole time?

I've been using flask and the results are remarkable. Remarkable to the point where I've one-shotted rather good things that I'm now using daily.

This isn't a fully formed thought, but could this be mitigated by giving LLMs your opinions? I am using copilot in more of a pair programming manner and for everything I want to make I give a lot of my opinions in the prompt. My changes are never too large though, a hundred lines of diff at most.

While I agree with the main thesis here, I find this extremely worrying:

> I am amazed every time how my 3-5k line diffs created in a few hours don’t end up breaking anything, and instead even increase stability.

In my personal opinion, there's no way you're going to get a high quality code base while adding 3,000 - 5,000 lines of code from LLMs on a regular basis. Those are huge diffs.


Yeah imagine one of your colleagues doing this to your code base...

Imagine a reviewer that doesn't block that patch immediately.

Of course, there might be some exceptions like if the codebase for some reason has some massive fixed tables or imports upstream files that may get updated occasionally. Those end up as massive patches or sets.


Well, it's possible the LLM is re-creating the file to do its updates. I've noticed in the past that LLMs tend to prefer doing that.

Not too different from how a college CS student who hasn't learned git yet would do it, come to think of it.

Still pretty bad if the author isn't taking the time to at least cull the changes. Though I guess it could just be file renames?


So history is going to be impossible to understand because every change is a total re-write of all affected files? I suppose that doesn't matter if you never actually try to investigate yourself and instead just tell your computer to fix the bug. You'd better hope it can though.

As long as at least half of the lines are well-written tests, this is more achievable than you'd think.

Yes. From experience, for a relatively complex system, 1k+ line PRs from mid-level devs without tests are almost guaranteed to have bugs; often nasty ones which can take many hours to identify and fix.

I remember when I started coding (decades ago), it would take me days to debug certain issues. Part of the problem was that it was difficult to find information online at the time, but another part of the problem was that my code was over-engineered. I could churn out thousands of lines of code quickly but I was only trying to produce code which appeared to work, not code which actually worked in all situations. I would be shocked when some of my code turned out to break once in a while but now I understand that this is a natural consequence of over-complicating the solution and churning out those lines as fast as I could without thinking enough.

Good code is succinct; it looks like it was discovered in its minimal form. Bad code looks like it was invented and the author tried to make it extra fancy. After 20 years coding, I can tell bad code within seconds.

Good code is just easy to read; first of all, you already know what each function is going to do before you even started reading it, just by its name. Then when you read it, there's nothing unexpected, it's not doing anything unnecessary.


As has been said, actual evals are needed here.

Anecdotally, the worst and most common failure mode of an agent is when an agent starts spinning its wheels and unproductively trying to fix some error and failing, iterating wildly, eventually landing on a bullshit (if any) “solution”.

In my experience, in Typescript, these “spin out” situations are almost always type-related and often involve a lot of really horrible “any” casts.


Right, I've noticed agents are very trigger happy with 'any'.

I have had a good time with Rust. It's not nearly as easy to skirt the type system in Rust, and I suspect the culture is also more disciplined when it comes to 'unwrap' and proper error management. I find I don't have to explicitly say "stop using unwrap" nearly as often as I have to say "stop using any".


Experienced devs coming in to TypeScript are also trigger happy with 'any' until they work out what's going on. Especially if they've come from Javascript.

I’ve tried enforcing no-explicit-any just to have the agent disable the linter rule. I guess I didn’t say you couldn’t do that…

LLMs are minimizing energy to solve problems, and if they can convince the human to go away happy with 'any', so be it.

There's a fine line between gradient descent, pedantry, and mocking. I suspect we will learn more about it.


The question can be asked two ways:

(1) Are current LLMs better at vibe coding typed languages, under some assumptions about user workflow?

(2) Are LLMs as a technology more suited to typed languages in principle, and should RL pipelines gravitate that way?


This is why I have a very specific ruleset and linting for my LLMs, not allowing `any` at all, plus other quality checks.

Is this a shareable ruleset? I would completely understand if not but I’m interested in learning new ways to interact with my tools.

I tend to have three layers of "rulesets", one general one I reuse across almost any coding task (https://gist.github.com/victorb/1fe62fe7b80a64fc5b446f82d313...), then language specific ones, and finally project specific ones. Concat them before injecting into the system prompt.
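As a sketch, the concat step is nothing fancy (the paths below are made up for illustration):

    from pathlib import Path

    def build_system_prompt(language: str, project_dir: str) -> str:
        # General rules -> language-specific rules -> project-specific rules.
        layers = [
            Path.home() / ".config/llm-rules/general.md",
            Path.home() / f".config/llm-rules/{language}.md",
            Path(project_dir) / "RULES.md",
        ]
        return "\n\n".join(p.read_text() for p in layers if p.exists())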

Second this method.

The one thing I would really recommend adding to your constraints is to Don't Repeat Yourself - always check if something already exists. LLMs like to duplicate functionality, even if it's included in their context.

Can I ask why you have asked it to avoid abstractions? My experience has been that the old rules, such as avoid premature abstraction or premature optimization, don't apply as cleanly because of how ephemeral and easy to write the actual code is. I now ask the LLM to anticipate the space of future features and design modular abstractions that maximize extensibility.


> Can I ask why you have asked it to avoid abstractions?

Some models like to add abstractions regardless of their usefulness (Google's models seem excessively prone to this for some reason), so I ended up having to prompt it away so it lets me come up with whatever abstractions are needed. The rules in that gist are basically just my own coding guidelines put in a way that LLMs can understand them; when I program "manually" I program pretty much that way.

I have yet to find any model that can properly plan feature implementations or come up with designs that are proper, including abstractions, so that's something I do myself at least for now, the system prompts mostly reflect that workflow too.

> because of how ephemeral and easy to write the actual code is

The code I produce isn't ephemeral by any measure I understand that word, anything I end up using stays where it is until it gets modified. I'm not doing "vibe coding" which it seems you're doing, might need some different prompts for that.


Until the agent disables the linter rule without you noticing!

Yup. I've watched both Claude and especially Gemini get frustrated trying to deal with my pre-commit checks (usually mypy) and deciding to do `git commit -n` even though my rules tell explicitly, multiple times, that it's never okay to bypass the pre-commit checks.

I know you are joking, but I have them injected into the tools they use; they run automatically every time the agent writes or updates a file. I can configure those to block the file edits completely, or just to surface feedback after each edit. This is restricted outside of the codebase, but of course they could find a loophole to hack the whole thing, or they could just get frustrated and run a recursive loop script that would crash my computer :)

Setting up linting with noExplicitAny is essential. But that won’t stop them from disabling it when they can’t figure something out. They’re sneaky little bastards.

The `any` type should be forbidden for LLMs, just as that escape hatch isn't available in compiled typed languages.

This claim needs to be backed up by evals. I could just as well argue the opposite, that LLMs are best at coding Python because there are two orders of magnitude more Python in their training sets than C++ or Rust.

In any case, you can easily get most of the benefits of typed languages by adding a rule that requires the LLM to always output Python code with type annotations and validate its output by running ruff and ty.


> In any case, you can easily get most of the benefits of typed languages by adding a rule that requires the LLM to always output Python code with type annotations and validate its output by running ruff and ty.

My personal experience is that by doing exactly that, the productivity, code readability, and correctness goes through the roof, at a slight increase in cost due to having to iterate more.

And since that is an actual language-independent comparison, it leads me to believe that yes, static typing does in fact help substantially, and that the current differences between vibe coding languages are, just like you say, due to the relative quantity of training data.


My experience with Github Copilot and Python has been that it _does_ generate better code completions for Python. It's sometimes shockingly good at predicting what you want to do in the next 30-50 lines of code based on a few well named variables. But that shockingly good code is also filled with hallucinated classes, methods, parameter ordering, etc. which completely negate its usefulness.

I agree that the training sets for LLMs have much more training data for Python than for Rust. But C++ has existed since before Python, I believe, so I doubt there is two orders of magnitude more Python code than C++.

You're missing how many fewer programmers there were in the early years, how much of that code was ever public, and even if it was, how useful it is now, as C++ has changed drastically since, say, what we used to write in 2001.

Python is pretty old, so I had a quick look.

https://en.wikipedia.org/wiki/C%2B%2B#History

In 1985, the first edition of The C++ Programming Language was released, which became the definitive reference for the language, as there was not yet an official standard.[31] The first commercial implementation of C++ was released in October of the same year.[28]

In 1998, C++98 was released, standardizing the language, and a minor update (C++03) was released in 2003.

https://en.wikipedia.org/wiki/History_of_Python

The programming language Python was conceived in the late 1980s,[1] and its implementation was started in December 1989[2] by Guido van Rossum at CWI in the Netherlands as a successor to ABC capable of exception handling and interfacing with the Amoeba operating system.[3]

Python reached version 1.0 in January 1994.

Of course it's hard to say how much that is reflected in the code available, and whether any of the old code is still valid input for modern use. It does broadly look like C++ is older, in general.


It's not just a question of whether there is more actual code in a given language, but how much is available in the public and private training data.

I've done work on reviewing and fine-tuning training data with a couple of providers, and the amount of Python code I got to see at least out-distanced C++ code by far more than 2 orders of magnitude. It could be a heavily biased sample, but I have no problems believing it also could be representative.


> But C++ has existed before Python I believe.

Sure, C++ is 42 years old, Python is “only” 34. Both are older than the online code hosts (or even the web itself) from which the code for AI training data is sourced, so age probably isn't a key factor in how much code of each is there, popularity with projects hosted in accessible public code repos is more relevant.


ty still misses things caught by mypy. It also doesn't have the same level of support for Pydantic yet. I use it (because it's so damn fast), but along with mypy, not a replacement yet.

Yes, mypy is slow, but who cares if it's the agent waiting on it to complete.


Yep, I prefer pyright though, but ty is too early to be relied on (though I love ruff, so I'm sure they will get there).

I think you vastly overestimate the capacity of Python typing.

The logic above can support exactly the opposite conclusion: LLMs can do dynamically typed languages better, since they don't need to solve type errors and they save several context tokens.

Practically, it was reported that LLM-backed coding agents just worked around type errors by using `any` in a gradually typed language like TypeScript. I also personally observed such usage multiple times.

I also tried using LLM agents with stronger languages like Rust. When complex type errors occurred, the agents struggled to fix them and eventually just used `todo!()`.

The experience above can be caused by insufficient training data. But it illustrates the importance of eval instead of ideological speculation.


In my experience you can get around it by having a linter rule disallowing it and using a local claude file instructing it to fix the linting issues every time it does something.

You can equally get around a significant portion of the purported issues with dynamically typed languages by having Claude run tests, and try to run the actual code.

I have no problem believing they will handle some languages better than others, but I don't think we'll know whether typing makes a significant difference vs. other factors without actual tests.


I always include instructions to have the LLM run `task build` before claiming a task is done.

Build runs linters and tests and actually builds the project, kinda-sorta confirming that nothing major broke.


It does not always work in my experience, due to complex type definitions. Also, extra tool calls and time are needed to fix linting.

>The logic above can support exactly the opposite conclusion: LLM can do dynamic typed language better since it does not need to solve type errors and save several context tokens.

If the goal is just to output code that does not show any linter errors, then yes, choose a dynamically typed language.

But for code that works at runtime? Types are a huge helper for humans and LLMs alike.


Or just bad training data. I've seen "any" casually used everywhere.

It's not so much typing that is valuable for vibecoding, but being able to give the agent hooks into tooling that provides negative feedback for errors. The easiest is typing, sure, because it's built into the compiler. But you can also add in static analysis linters and automated testing, including - notably - testing for performance.

Of course, you have to tell the agent to set up static analysis linters first, and tell the agent to write tests. But then it'll obey them.

The reason why large enterprises could hire armies of juniors in the past, safely, was because they set up all manner of guardrails that juniors could bounce off of. Why would you "hire" a "junior" agent without the same guardrails in place?


Exactly this. The ability of LLMs to write code is going to strongly depend on the availability and quantity of training data. But agentic coding is more than just LLMs, it is also the various abilities that give feedback to the LLM to refine the resulting code...and that is something that strongly typed and statically typed languages do so much better than their weak/dynamic counterparts.

I've noticed a fairly similar pattern. I particularly like vibecoding with golang. Go is extremely verbose, which makes it almost like an opposite perl - writing go is a bad experience, but reading go is delightful. The verbosity of golang makes it so you're able to always jump in and understand context, often from just a single file.

Pre-llms, this was an up front cost when writing golang, which made the cost/benefit tradeoff often not worth it. With LLMs, the cost of writing verbose code not only goes down, it forces the LLM to be strict with what it's writing and keeps it on track. The cost/benefit tradeoff has increased greatly in go's favor as a result.


No shade on Go but you kinda just said that the language has always looked like AI generated code and this works in its favor now because you don’t actually have to write it anymore. Funny, but not sure I’d consider that in Go’s favor.

My experience with Python and Scala so far is different. With Python the LLMs do a pretty good job. The code always compiles; sometimes there are some logical or architectural errors, but that's it.

With Scala, I have to give the LLM a super simple job, e.g. creating some mock data for a unit test, and even then it frequently screws up; every now and then it gives me code that doesn't even compile. So much for Scala's strong type system...


Less training data for Scala is the issue probably. Imagine how much Python is out there.

I've been asking it to spit out Python all day long and it just flies with it. Ask all the LLMs; most of them will tell you Python is the top, if not preferred, language.

I can vibecode in Rust but I don't like the result. There are too many lines of code and they are too long and contain too many symbols and extra stuff.

Just compare SeaORM with Ruby + sequel where you just inherit the Sequel::Model class and Sequel reads the table schema without you having to tell it to. It gives you objects with one method for each column and each value has the correct type.

I was happy with Ruby's performance 15 years ago, and now it's about 7-20x that with a modern Ruby version and CPU, on a single thread.

AI is still helpful to learn but it doesn't need to do the coding when using Ruby. I think the same criteria apply with or without AI for choosing a language. Is it a single-person project? Does it really require highly optimized machine code? etc.


Sounds like you really need to have a look at loco.rs! Given the right constraints, I’m fairly productive with CC

I've been vibe-coding for weeks in Rust, and it works great.

I've been vibe-coding for a few days in Haskell, and I don't like the result.

Maybe I am just accustomed to being ok with verbose Rust, while Haskell comes with a great potential for elegance that the LLM does not explore.

Regardless, the argument that types guide the LLM in a very reliable way holds in both cases.


My experience with Haskell has been the same. The GHC provides stellar feedback, so the LLM is almost always able to bang the code into working order, but wow is that code bloated.

The closest we got to vibe coding pre-LLMs was using a language with a very good strong type system in a good IDE and hitting Ctrl-Space to autocomplete your way to a working program.

I wonder if LLMs can use the type information more like a human with an IDE.

eg. It generates "(blah blah...); foo." and at that point it is constrained to only generate tokens corresponding to public members of foo's type.

Just like how current-gen LLMs can reliably generate JSON that satisfies a schema, the next gen will be guaranteed to natively generate syntactically and type-correct code.


> I wonder if LLMs can use the type information more like a human with an IDE.

Just throw more GPUs at the problem and generate N responses in parallel and discard the ones that fail to match the required type signature. It’s like running a linter or type check step, but specific to that one line.


We have infinite uranium anyway !


You already can use LLM engines that force generation according to an arbitrary CFG definition. I am not aware of any systems that apply that to generating actual programming language code.

My experience suggests the opposite of what this article claims. Claude Code is ridiculously good with vanilla JavaScript, provided that your code is well written. I tried it with a TypeScript code base and it wasn't anywhere near as good.

With JS, Claude has very high success rate. Only issue I had with it was that one time it forgot to update one part of the code which was in a different file but as soon as I told it, it updated it perfectly.

With TypeScript my experience was that it struggles to find things. Writing tests was a major pain because it kept trying to grep the build output because it had to mock out one of the functions in the test and it just couldn't figure it out.

Also, the typed code it produces is more complex for solving the same problem, spread across more files, and it struggles to get the right context. Also, TS is more verbose (this is objectively true and measurable); it requires more tokens so it literally costs more.


The real win isn't static vs dynamic typing. It's immediate, structured feedback for LLM iteration. cargo check gives the LLM a perfectly formatted error it can fix in the next iteration. Python's runtime errors are often contextless ('NoneType has no attribute X') and only surface after execution. Even with mypy --strict, you need discipline to check it constantly. The compiler makes good feedback unavoidable.
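Toy example of the difference (the checker message is paraphrased):

    from typing import Optional

    def find_user(user_id: int) -> Optional[str]:
        return None  # stand-in for a lookup that can miss

    name = find_user(42)
    print(name.upper())  # mypy flags this before the code runs: "None" has no attribute "upper"
                         # at runtime you'd only see the AttributeError after execution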

Printing stack traces generates a lot of useful context but it's not done enough.

Writing Rust, the LLM almost never gets function signatures and return types wrong.

That just leaves the business logic to sort out. I can only imagine that IDEs will eventually pair directly with the compiler for instant feedback to fix generations.

But Rust also has traits, lifetimes, async, and other type flavors that multiply complexity and cause issues. It's also an in-progress language... I'm about to add a "don't use once_cell, it's part of std now" to my system prompt. So it's not all sunshine, and I'm deeply curious how a pure vibe-coded Rust app would turn out.


Gemini has been doing a fantastic job for me for Rust

I have the same impressions. Typing helps a lot, and (I think) in a few ways - one is being a safe guard, second a constraint (so say, AI is less likely to create a clunky variable which can be a string, list, or a few other things), third - to prompt into writing solid code in general.

I add one more step - add strong linting (ESLint with all recommended rules switched on, Ruff for Python) and asking to run it after each edit.

Usually I also prompt to type things well, and avoid optional types unless strictly necessary (LLMs love to shrink responsibility that way).

For example, see my recent vibe-coding instructions, https://github.com/QuesmaOrg/demo-webr-ggplot/blob/main/CLAU....
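To make the optional-types point concrete, a tiny sketch of the pattern I prompt against (function names made up):

    from typing import Optional

    # What an LLM tends to write: reach for Optional so it never has to commit.
    def parse_port(raw: str) -> Optional[int]:
        try:
            return int(raw)
        except ValueError:
            return None  # now every caller has to handle None

    # What I ask for instead: commit to the type, fail loudly at the boundary.
    def parse_port_strict(raw: str) -> int:
        return int(raw)  # raises ValueError on bad input; no Optional leaking out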


Folks here may be interested in checking out Isograph. In [this conference talk](https://www.youtube.com/watch?v=sf8ac2NtwPY), I vibe code an Isograph app, and make non-trivial refactors to it using Cursor. This is only feasible because the interface between components is very simple, and all the hard stuff (generating a query for exactly the needed data, wiring things up, etc.) is done deterministically, by a compiler.

It's not quite the same principle OP articulates, which is that a compiler provides safety and that certainty lets you move fast when vibe coding. Instead, what I'm claiming is that you can move fast by allowing the LLM to focus on fewer things. (Though, incidentally, the compiler does give you that safety net as well.)


I'm really shocked at how slow people are to realize this, because it's blindingly obvious. I guess that just shows how much the early adopter crowd is dominated by Python and JavaScript.

(BTW the answer is Go, not Rust, because the other thing that makes a language well suited for AI development is fast compile times.)


My experience with agent-assisted programming in Rust is that the agent typically runs `cargo check` instead of `cargo build` for this exact reason -- it's much faster and catches the relevant compilation errors.

(I don't have an opinion on one being better than the other for LLM-driven development; I've heard that Go benefits from having a lot more public data available, which makes sense to me and seems like a very strong advantage.)


I think this is true -- especially for new code.

I did this not knowing any Rust: https://github.com/KnowSeams/KnowSeams and Rust felt like a very easy-to-use scripting language.


Really cool stuff, I appreciate you sharing this.

Although, to be fair this is far from vibecoding. Your setup, at a glance, says a lot about how you use the tools, and it's clear you care about the end result a lot.

You have a PRD file, your tasks are logged, each task defines both why's and how's, your first tasks are about env setup, quality of dev flow, exploration and so on. (as a nice tidbit, the model(s) seem to have caught on to this, and I see some "WHY:" as inline comments throughout the code, with references to the PRD. This feels nice)

It's a really cool example of "HOW" one should approach LLM-assisted coding, and shows that methods and means matter more than your knowledge in langx or langy. You seem to have used systems meant to help you in both speed of dev and ease of testing that what you got is what you need. Kudos!

I might start using your repo as a good example of good LLM-assisted dev flows.


That seems a little bit dangerous; why not do it in a language you know? Plus, this is not launching rockets to the moon, it's a sentence splitter with a fancy state machine (probably very useful in your niche, not a critique). The difficulty was for you to put in the effort to build a complicated state machine; the rest was frankly not very LLM-needing, and now you can't maintain your own stuff without Nvidia burning uranium.

Did the LLM help at all in designing the core, the state machine itself?


Nah it was a hobby project because I was laid off for a bit.

Rust's RegEx was perfect because it doesn't allow anything that isn't a DFA. Yes-ish, the LLM facilitated designing the state machine, because it was part of the dev-loop I was trying out.

The speed is primarily what enabled finding all of the edge cases I cared about. Given it can split 'all' of a local project gutenberg mirror in < 10 seconds on my local dev box I could do things I wouldn't otherwise attempt.

The whole thing is there in the ~100 "completed tasks" directory.


I've been wondering about this for some time. My initial assumption was that LLMs will ultimately be the death of typed languages, because type systems are there to help programmers not make obvious mistakes, and near-perfect LLMs would almost never make obvious mistakes. So in a world of near-perfect LLMs, a type system is just adding pointless overhead.

In this current world of quite imperfect LLMs, I agree with the OP, though. I also wonder whether, even if LLMs improve, we will be able to use type systems not exactly for their original purpose but more as a way of establishing that the generated code is really doing what we want it to, something similar to formal verification.


Even near-perfect LLMs would benefit from the compiler optimizations that types allow.

However perfect LLMs would just replace compilers and programming languages above assembly completely.


It's interesting to think about what is 'optimal' when discussing LLMs, considering that the cost is per-token. So assembly would be far from optimal as it is not exactly a succinct language... A lot of common operations are repetitive and require many instructions; a more abstract, higher-level language might actually be inherently more succinct.

It's not just that humans aren't good at thinking in assembly language or binary, but the operations are much more granular, and so it requires a lot of operations to express something as simple as a for-loop or a function call.

I think the perfect AI might actually come up with a language closer to Python or JavaScript.


> I am managing projects in languages I am not fluent in—TypeScript, Rust and Go—and seem to be doing pretty well.

> It seems that typed, compiled, etc. languages are better suited for vibecoding, because of the safety guarantees. This is unsurprising in hindsight, but it was counterintuitive because by default I “vibed” projects into existence in Python since forever

[...]

> For example, I refactored large chunks of our TypeScript frontend code at TextCortex. Claude Code runs tsc after finishing each task and ensures that the code compiles before committing. This let me move much faster compared to how I would have done it in Python, which does not provide compile-time guarantees.

While Python doesn't have a required compilation step, it has both a standard type system and typecheckers for it (mypy, etc.) that are ubiquitous in the community and could be run at the same point in the process.

I would say it's not just Rust, TypeScript, and Go that the author has a weak foundation in.


Here’s a study that found that for small problems Gemini is almost equally good at Python and Rust. Looking at the scores of all the languages tested, it seems that the popularity of the language is the most important factor:

https://jackpal.github.io/2025/03/29/Gemini_2.5_Pro_Advent_o...


But isn't it the case that Python is vastly more popular than Rust?

If Gemini is equally good at them in spite of that, doesn't that mean it'd be better at Rust than at Python if it had equal training in both?


The study points out, “Python and Rust are the two most popular languages used by Advent of Code participants. This may explain why Rust fares so well.”

Ah, that makes a lot more sense. Thanks!

I'm not sure I agree with the author's conclusion. While Python was never a great language for large codebases (it thrived because people with little development knowledge could get going pretty easily), a large part of its current appeal is the profusion of great specialized libraries which you would have to write yourself in other languages.

I suspect vibe coding will not be a good fit for writing these libraries, because they require knowledge and precision which the typical vibe coding user probably doesn't have, or the willingness to spend time on the topic, which is also typically not what drives people to vibe coding.

So my conclusion would be that vibe coding drives the industry to solidify around already well-established ecosystems, since fewer of the people producing code will have the time, knowledge and/or will to build that ecosystem in newer languages. Whether that drive is strong enough to be noticeable is another question.


Then again, LLMs are well-suited to translate stuff, a relatively grunt work kind of task, so porting libs to your ecosystem of choice is a lot more feasible now.

Perhaps there is a future where individuals can translate large numbers of libraries, and instead of manually porting future improvements of the original versions to the copies, just rerun the translation as needed.


Yup, I recently started doing more development in Nim. I love the language, but the user community is (currently) small, which means the ecosystem of libraries available isn't as big. But LLMs are a massive equalizer here and has made it a lot easier for me to get things done with Nim.

Such extraordinary claims require extraordinary evidence, not "vibes".

> It seems that typed, compiled, etc. languages are better suited for vibecoding, because of the safety guarantees.

There are no "safety guarantees" with typed, compiled languages such as C, C++, and the like. Even with Go, Rust and others, if you don't know the language well enough, you won't find the "logic bugs" and race conditions in your own code that the LLM creates, even with the claims of "safety guarantees".

Additionally, the author is slightly confusing the meaning of "safety guarantees", which refers to memory safety. What they really mean is "reasoning with the language's types", which is easier to do with Rust, Go, etc. and harder with Python (without types) and JavaScript.

Again, we will see more LLM-written code like this example: [0]

[0] https://sketch.dev/blog/our-first-outage-from-llm-written-co...


Why not just tell the LLM in rules to strongly type and write guardrails and then:

    [tool.ruff]
    line-length = 88
    select = ["E", "F", "W", "I", "N", "UP", "B", "C4"]  # A good strict baseline
    ignore = []

    [tool.mypy]
    python_version = "3.12"
    warn_return_any = true
    warn_unused_configs = true
    disallow_untyped_defs = true
    disallow_any_unimported = true
    no_implicit_optional = true
    check_untyped_defs = true
    strict = true


Yep, this is pretty much what I do. I also added a /check command to explicitly run all those checks plus pyright and ruff format.

The language using the fewest punctuation tokens is going to be the safest from most categories of hallucination, and give each context window the greatest usable space for vector manipulation headed into self-attention before the model suffers from "vector-clouded judgment" due to overcrowded latent space.

I am comfortable with both Python and Go. I prefer Go for performance; however, the earlier issue was verbosity.

It is easier to write things using a Python dict than to create a struct in Go or use the weird `map[string]interface{}` and then deal with the resulting typecast code.

After I started using GitHub Copilot (before the Agents), that pain went away. It would auto-create the field names, just by looking at the intent or a couple of fields. It was just a matter of TAB, TAB, TAB... and of course I had to read and verify - the typing headache was done with.

I could refactor the code easily. The autocomplete is very productive. Type conversion was just a TAB. The loops are just a TAB.

With Agents, things have become even better - but also riskier, because I can't keep up with the code review now - it's overwhelming.


LLMs also write good C, if well directed. My feeling is that this is not really about C or something inherent to Python (where I get less than stellar results), but about the large, low-quality Python codebases that are out there. Basically my hypothesis is that, within the training set, there are languages with better examples and languages with worse examples. I found that prompt engineering goes a long way toward better Python: especially stressing not to pull in dependencies that aren't really needed, to write simple code, to avoid trivial asserts that aren't really useful, and so forth.

I have found this to be true as well. Although I exclusively use Python and R at work and tried CC several times for small side projects, it always seemed to have problems and ended up in a loop trying to fix its own errors. CC seems much better at vibe coding with TypeScript. I went from no knowledge of Node.js development to deploying a reasonable web app on Vercel in a few days. Asking CC to run tsc after changes helps it fix any errors because of the faster feedback from the type system compared to Python. Granted, this was only for a personal side project and may not hold for production systems that are much larger, but I was pleasantly surprised how easy it was in TypeScript compared to Python.

It may be a Claude-specific thing. I asked Claude to do various tasks in machine learning, like implementing gradient boosting, without specifying the language, thinking it would use Python since it is the most common option and has utilities like NumPy to make it much easier. But Claude mostly chose JavaScript and somehow managed to do it in JS.

> I was pleasantly surprised how easy it was in typescript compared to python

It's time for people to wake up, stop using Python, and stop forcing me to use Python


it aint great at c# i can tell you. this from grok yesterday:

    foreach (string enumName in Enum.GetNames(typeof(Pair)))
    {
        if (input.Contains($"${enumName}"))

The argument against Python is weak because Python can be written with types. Moreover, the types can be checked for correctness by various type checkers.

The issue is those who don't use type checkers religiously with Python - they give Python a bad name.


My experience with LLMs in Rails has been... pretty bad. It isn't good at tracking 'context' (not in the technical token sense) and constantly gets lost in the sauce, doing weird stuff.

Given Rails' maturity, I would have expected otherwise - there is tons of Ruby/Rails code to train on, but... yeah.

OTOH, doing some side-project stuff in TS, and the difference is a little mindblowing. I can see the hype behind vibecoding WAY more.


I have not found this to be the case at all. Type mismatches have been very common in Java, C++ and Objective-C inference output. I think there is complexity in what contributes to LLM suitability to programming tasks, and the nature and history of APIs relevant to the ask are a big part of that. Seems that the OP really loves their types, like many here, and this article is just more evangelism.

I've found most LLMs I've tried generate better code in typed, procedural languages than they do in something like Clojure.

From the perspective of a primarily backend dev who knows just enough React/ts to be dangerous, Claude is generating pretty decent frontend code, letting me spend more time on the Rust backend of my current side project.


> generate better code in typed, procedural languages

Better in what sense? I've been using Anthropic models to write in different Lisps - Fennel, Clojure, Emacs Lisp - and they do a decent job. I can't always blindly copy-and-paste generated code, but I wouldn't do that with any PL.


You could just leave it at "Typed languages are better."

Interesting...my experience has been that LLMs are generally better at more common languages (not surprising: more data exists in those languages!). So, my admittedly amateur vibe coding experiences have been best in Python and pretty vanilla web development setups. When I push LLMs to, say, fit advanced statistical models in R, they fall apart pretty badly. Yet they can crush a PyTorch or SciKitLearn task no problem.

This. This is the most important thing to consider: the available corpus the model was trained on. Remember that LLMs are inferring code. They don't "know" anything at all about its axiomatic workings. They just know what "looks right" and what "looks wrong". Agentic workflows and RL are about to make this philosophy obsolete on a grand scale, but the signs still don't look good for being able to improve how much they can "hold in their head" to infer what token to spit out next from the vector embedding, though.

i find claude is very good with elixir, which is a dynamically typed language. i suspect strong conventions and value immutability help.

I’ve not had good success with vibing rust. It requires lots and lots of handholding and editing. Perhaps it’s because the model is always trying to do things from scratch. It does a poor job of finding crates and understanding the docs and implementation.

All existing programming languages are designed for human beings. Is it the right time to design something specifically for vibe coding? For example, ease of reading and understanding is probably much more important than all the syntactic sugar that reduces typing. Creating ten ways to accomplish the same task is not useful for LLMs.

This is the complete opposite of how LLMs are trained. LLMs are most effectively prompted (for instruct/chat finetunes anyway, i.e. chatbots) through the same kind of language patterns (natural or formal/programming) that they learn from. Trying to write formal prompts to them is exactly as misguided as speaking to your friends and family in C.

I've been wondering if Java would have a resurgence due to strong typing even into the error types, and widespread runtime availability. But so far, seems no.

Ease of understanding: JavaScript. That was literally its design goal; JS might have a whole lot of bad parts, but it's flexible and easy to understand.

I have had the exact opposite experience with Claude and low-level C. Claude is very good at writing the classic C functions you need on a daily basis. I often wonder how much defensive coding it puts into the functions. I myself now let any code I write be read at least once by Claude.

Totally agree. With AI coding, ensuring correctness is critical. Having types and compile-time checks helps a lot.

Typed, but maybe with the exception of the likes of Swift, where Claude reveals just how complex and ambiguous the language can be. The lack of documentation and the overly complex proposal documents also appear to overload the LLM context and confuse it.

I can see this making sense purely from a tool chain perspective. If we are entering the age of treating code like cattle, then it makes sense that overly verbose and strict languages would benefit from it.

So if for whatever reason it is better for vibe coding, then legacy code aside, why would anyone not use a technology that makes it a bit easier for them to understand what the AI is actually churning out on their behalf?

Typed languages are also better suited to IDE assistance and static analysis

I'm a relatively old school lisp fan, but it's hard to do this job for a long time without eventually realizing helping your tools is more valuable than helping yourself


Nim might hit the sweet-spot here: typed, compiled, and Python-like.

I wrote this [1] comment a few weeks ago:

""" ... Claude Code is surprisingly good at writing Nim. I just created a QuickJS + MicroPython wrapper in Nim with it last week, and it worked great!

Don't let "but the Rust/Go/Python/JavaScript/TypeScript community is bigger!" be the default argument. I see the same logic applied to LLM training data: more code means more training data, so you should only use popular languages. That reasoning suggests less mainstream languages are doomed in the AI era.

But the reality is, if a non-mainstream language is well-documented and mature (Nim's been around for nearly 20 years!), go for it. Modern AI code gen can help fill in the gaps. """

[1]: https://news.ycombinator.com/item?id=44400913


I programmed my services in Python without any conventions and I suffered a lot. Now I do it in typed languages with strong, compulsory conventions and things are far more manageable.

Python has static typing unless you don't add any types. The vast majority of reputable Python codebases nowadays use static typing rigorously. If you don't, you should. To enforce it when coding with an agent you can either tell the agent to run the type checker after every edit (e.g. via a hook in Claude Code), or if you're using an agent that has access to the LSP diagnostics then tell it to look at them and demand that they are clean after every edit (easy with Cursor, and achievable in Claude Code I believe via MCP).

Why isn't the agent smart enough to recognize typed Python code existing in a project or detect that an explicit py.typed file exists?

In the case of Claude Code the hook feature is ideal for this so I could imagine the designers deciding that it is more appropriate to put the user in control. That said I think I do agree with you that -- given Python's fairly unique position of having good static typing but not requiring it -- the agents should default to running the type checker if they see it configured in pyproject.toml.

> The vast majority of reputable Python codebases nowadays use static typing rigorously

As judged by who? And in what field?

I mean, if I look at the big Python libraries I use regularly none of them have types - Django, DRF, NumPy, SciPy, Scikit-learn. That’s not to say there aren’t externally provided stubs but the library authors themselves are often not the ones writing them


Yes fair enough my wording wasn't great. And we can add cpython to your list... But the provided type stubs for those libraries make the resulting user experience the same as if they had types. Does Django ORM have a good typing experience via stubs btw? I know that one's always been a challenge.

Overall though my point was that the article, and most comments here, were completely misrepresenting the situation regarding Python. It's a statically typed language for those that want it to be. There's no need to attempt to run any code that hasn't passed a type checker. And it's an expressive type system; much more so than Go which has been mentioned in comments.

However the fact that the standard library documentation doesn't have types is embarrassing IMO.


They do and they don't; there are often mismatches where the library gets updated and the stubs haven't been. That makes adopting them difficult to recommend in some cases, especially if the library is in more flux.

Django's stubbing isn't great; there's a lot of polymorphism in the framework (and in DRF). You actually have to change your code rather than just sprinkling annotations in some places to stop getting 'Any' types.

With the numeric stuff it's even worse though; with something like:

    np.sum(X)
the appropriate type of X can be a Python list, a NumPy array of any numeric type and dimension, etc.
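
For illustration, a rough sketch of how broad that signature really is, using numpy.typing.ArrayLike (the function name here is made up):

    from typing import Any

    import numpy as np
    import numpy.typing as npt

    def total(x: npt.ArrayLike) -> Any:
        # ArrayLike admits plain lists, nested lists, scalars and ndarrays
        # of any dtype and dimension, so the checker learns very little
        # about what comes back out.
        return np.sum(x)

    total([1, 2, 3])        # plain Python list
    total(np.ones((2, 3)))  # 2-D float64 array
    total(5)                # a bare scalar also type-checks

That breadth is convenient to call, but it gives a type checker almost nothing to verify.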

Nit upfront: Python is typed, just not statically typed.

What dynamically typed languages lack in compile-time safety, the programmer must make up using (automated) testing. With adequate tests, a python program doesn't break more than a Rust or Go program. It's just that people often regard testing as an annoying chore which is the first thing they skip when vibe coding (or "going fast and breaking things" which is then literally what happens).
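
As a minimal sketch of what that trade looks like in practice (a hypothetical helper, run with pytest): the kind of misuse a compiler would reject up front has to be exercised by a test instead.

    import pytest

    def word_count(text):
        return len(text.split())

    def test_word_count():
        assert word_count("vibe coding with types") == 4

    def test_word_count_rejects_non_strings():
        # Without a static checker in the loop, passing the wrong type only
        # surfaces at runtime, so a test has to cover that path explicitly.
        with pytest.raises(AttributeError):
            word_count(42)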


"a python program doesn't break more than a Rust or Go program"

But it does break more, though. You can literally just have the LLM check the LSP diagnostics to analyze the code early, without writing tests to begin with. The LSP and compiler are just that smart.


Preference for typed languages may be the one and only good thing to come out of this bubble.

Curious, has it been proven that typed languages are easier for LLMs to work with, as they don't have to infer types?

Do they infer anything? Correct me if I'm wrong but having the types right there in the source for training data just means more context.

I'm not aware of any rigorous study on it, but my personal anecdote is that I don't even bother with Claude Code or similar unless the language is Haskell, the deployment is Nix, the config is Dhall, and I've written property tests. Once you set it up like that you just pour money in until it's too much money or it's stuck, and that's how far LLMs can go now.

I used to yell at Claude Code when it tried to con me with mocks to get the TODO scratched off, now I laugh at the little bastard when it tries to pull a fast one on -Werror.

Nice try Claude Code, but around here we come to work or we call in sick, so what's it going to be?


There is research backing some version of "typed languages are better for LLMs", like https://arxiv.org/abs/2504.09246, Type-Constrained Code Generation with Language Models, where the LLM's output is constrained by type checkers.

Also https://arxiv.org/abs/2406.03283, Enhancing Repository-Level Code Generation with Integrated Contextual Information, uses static analyzers to produce prompts with more context info.

Yet the argument does not directly translate to the conclusion that typed languages are rigorously better for LLMs without external tools. However, a typed language and its static analysis information do seem to help LLMs.


Dynamically typed languages are far from "untyped". Though they may well require more effort to analyze from scratch without making assumptions, there is nothing inherently preventing type-constrained code generation of the kind the first paper proposes even without static typing.

A system doing type-constrained code-generation can certainly implement its own static type system by tracking a type for variables it uses and ensuring those constraints are maintained without actually emitting the type checks and annotations.

Similarly, static analyzers can be - and have been - applied to dynamically typed languages, though if these projects have been written using typical patterns of dynamic languages the types can get very complex, so this tends to work best with code-bases written for it.
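
As a toy sketch of that idea (illustrative only, not the mechanism from the cited papers): a generator targeting a dynamic language can keep its own type environment and reject candidate statements whose inferred types clash, without ever emitting annotations.

    # Toy sketch: reject candidate assignments that would silently change
    # a variable's inferred type, even though the target language is dynamic.
    INT, STR = "int", "str"

    def literal_type(token: str) -> str:
        return INT if token.lstrip("-").isdigit() else STR

    class TypeEnv:
        def __init__(self) -> None:
            self.types: dict[str, str] = {}

        def check_assign(self, name: str, value_token: str) -> bool:
            inferred = literal_type(value_token)
            known = self.types.get(name)
            if known is not None and known != inferred:
                return False  # candidate would flip the variable's type
            self.types[name] = inferred
            return True

    env = TypeEnv()
    print(env.check_assign("n", "41"))       # True  -> n is tracked as int
    print(env.check_assign("n", "'hello'"))  # False -> rejected, type clash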


this is just the kind of sass I needed today. cheers!

I reached the same conclusion. But unfortunately LLMs are not that good at C#.

What did the LLMs get wrong most frequently when working with C#?

They write code that doesn't build on the first or second shot. With Claude Code I gave it instructions to "fix tests" and it became so frustrated with them it started to "rm" the files lol

Providing agents with relevant documentation including code samples and API references should help a lot in your scenario. Relevant documentation helps a lot when you're working with obscure languages or libraries.

Everything said is true without AI as well, at least for me. I don't hate Python, and I like it for very small scripts, but for large programs the lack of static types makes it much too brittle IMO. Static typing gives the confidence that not every single line needs testing, which reduces friction during the lifecycle of the code.

They are also better suited for being ported to other languages, also unsurprisingly

This generalises to “Agents respond well to red/green feedback loops”.

Try PURE functional programming, it's even better ;)

try not vibe coding. it's even better

I've had really good experiences with claude code + rust

*Typed languages are better

Fixed it


I can say with 100% certainty that they all stink at Rust. It’s laughably bad. Python, on the other hand, is surprisingly good.

I scraped every comment on HN that discussed using Rust with LLMs and about half gave positive feedback, half negative feedback.

Can you explain more why you've arrived at this opinion?


With Python, it scales better if the Python is well-typed, not so much otherwise.

It's the best at Go imho since it has enforced types and a garbage collector.


I don't know what vibecoding is, and at this point I'm too afraid to ask.

I wouldn't worry too much, no-one seems to be able to agree what it means anyway.

Depending on who you speak to it can be anything from coding only by describing the general idea of what you want, to just being another term for LLM assisted programming.


The strict original definition of vibe coding is an LLM writing code with the programmer never caring about the code, only caring about the code's runtime output. It is easily the worst way to use LLMs for code, and I think even coining the term was a highly irresponsible and society-damaging move by Karpathy, making me lose much respect for him. This coined definition was taken literally by managers to fire workers.

In truth, for LLM generated code to be maintainable and scalable, it first needs to be speced-out super well by the engineer in collaboration with the LLM, and then the generated code must also be reviewed line-by-line by the engineer.

There is no room for vibe coding in making things that last and don't immediately get hacked.


It’s fine to not know what it is, but what is the rationale for commenting that you don’t know? Why not just look it up? Or don’t, as you’re too afraid to ask.

Here's the tweet where Karpathy coined the term.

tldr; fast throwaway code from an LLM, where the human is just looking at the results and not trying to make maintainable code.

> There's a new kind of coding I call "vibe coding", where you fully give in to the vibes, embrace exponentials, and forget that the code even exists. It's possible because the LLMs (e.g. Cursor Composer w Sonnet) are getting too good. Also I just talk to Composer with SuperWhisper so I barely even touch the keyboard. I ask for the dumbest things like "decrease the padding on the sidebar by half" because I'm too lazy to find it. I "Accept All" always, I don't read the diffs anymore. When I get error messages I just copy paste them in with no comment, usually that fixes it. The code grows beyond my usual comprehension, I'd have to really read through it for a while. Sometimes the LLMs can't fix a bug so I just work around it or ask for random changes until it goes away. It's not too bad for throwaway weekend projects, but still quite amusing. I'm building a project or webapp, but it's not really coding - I just see stuff, say stuff, run stuff, and copy paste stuff, and it mostly works.


Vibecoding is bad coding. Always. Even if I take the headline as correct, so what? It's still crap code that will collapse into an unmaintainable mess sooner rather than later.

> Because of this, I predict a decrease in Python adoption in companies, specifically for production deployments, even though I like it so much.

Most definitely not going to happen. Python is the language of the AI age, and a lot of ML/AI libraries do their reference or first release in Python.


Curious if anyone has done experiments where they compare output of Python vs annotated Python. Do type hints help LLMs?


