
Leslie Lamport said that he invented TLA+ so people could "think above the code".

It was meant as a tool for people to improve their thinking and description of systems.

LLM generation of TLA+ code is just intellectual masturbation.

It may get the work done for your boss. But your intellect will still remain bald, in which case you are better off not writing TLA+ at all.




> [TLA+] was meant as a tool for people to improve their thinking and description of systems.

Why the speciesism? Why couldn't LLMs use TLA+ by translating a natural-language request into a TLA+ model and then checking it in TLA+?


TLA+ was invented in the first place because Leslie Lamport thought natural language was a dubious tool for "specifying systems".

Yes, an LLM may generate the TLA+ code, perhaps even correctly, but model checking is not the end goal of TLA+.

TLA+ is written to fully understand how a system works at an abstract level.

Anyway, I guess you could just read the LLM-generated TLA+ code. That would help you understand the abstraction of the system, but is the LLM's abstraction equal to your abstraction?

But vibe-coded TLA+ sounds extremely dangerous, especially in mission-critical areas where it's actually needed: smart contracts, pacemakers, aircraft software, etc.


Not the OP, but I would rather give a formal specification of my system to an AI and have it generate the code.

I believe the point is that it's easier for a human to verify a system's correctness as expressed in TLA+, and then verify that the code matches that specification, than it is to verify the entire codebase as a system all at once.

Then, if my model of the system is flawed, TLA+ will tell me.
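To make that concrete, here's a toy counter spec I made up (not from any real system): TLC explores every reachable state, and if a claimed invariant doesn't hold it hands back a trace showing exactly how the model breaks.

    ---- MODULE Counter ----
    EXTENDS Naturals

    VARIABLE count

    \* Start at zero.
    Init == count = 0

    \* Each step increments the counter; nothing ever bounds it.
    Next == count' = count + 1

    Spec == Init /\ [][Next]_count

    \* Claimed invariant, checked by TLC via its config file
    \* (INVARIANT CountStaysSmall). Because Next never wraps or
    \* stops, TLC reports a violating trace once count reaches 11,
    \* i.e. the model itself is flawed, not the code.
    CountStaysSmall == count <= 10
    ====

Whether I write that module or an LLM does, the counterexample trace is only useful if the abstraction in the spec matches the one in my head.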

I'm an AI bull, so if I give the LLM a natural-language description, I'd like the LLM to explain the model instead of just writing the TLA+ code.



