Using LLMs for formal specs / formal modeling makes a lot of sense to me. If an LLM can do the work of going from informal English-language specs to TLA+ / Dafny / etc, then it can hook into a very mature ecosystem of automated proof tools.
I'm picturing it something like this:
1. Human developer says, "if a user isn't authenticated, they shouldn't be able to place an order."
2. LLM takes this, and its knowledge of the codebase, and turns it into a formal spec -- like, "there is no code path where User.is_authenticated is false and Orders.place() is called" (see the sketch after this list).
3. Existing code analysis tools can confirm or find a counterexample.
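Concretely, step 2 could come out as a tiny TLA+ sketch along these lines (purely hypothetical names and a deliberately simplified model, not something I'd expect an LLM to emit verbatim):

    ---- MODULE OrderAuth ----
    \* Tiny abstraction of the system: one session, one order.
    VARIABLES authenticated, orderPlaced
    vars == <<authenticated, orderPlaced>>

    Init == authenticated = FALSE /\ orderPlaced = FALSE

    LogIn == authenticated' = TRUE /\ UNCHANGED orderPlaced

    PlaceOrder ==
        /\ authenticated = TRUE    \* the guard the LLM has to infer from the codebase
        /\ orderPlaced' = TRUE
        /\ UNCHANGED authenticated

    Next == LogIn \/ PlaceOrder
    Spec == Init /\ [][Next]_vars

    \* The English requirement as a state invariant:
    \* no order is ever on record without an authenticated session.
    NoUnauthOrder == orderPlaced => authenticated
    ====

Step 3 is then just running TLC with NoUnauthOrder as an invariant: it either passes or hands back a counterexample trace showing how an unauthenticated order could happen.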
A fascinating thought. But then who verifies that the TLA+ specification does indeed match the human specification?
I’m guessing using an LLM as a translator narrows the gap, and better LLMs will eventually narrow it further, but is there a way to quantify this? For example, how would it compare to a human translating the spec into TLA+?
The usual way to check whether a definition is correct is to prove properties about it that you think should hold. TLA+ has good support for this, both with model checking and with simple proofs.
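For instance, continuing the toy order/auth sketch upthread (same hypothetical names), you'd write down the properties you expect and let the tools check them against the definitions:

    \* Sanity properties about the definitions themselves:
    TypeOK == authenticated \in BOOLEAN /\ orderPlaced \in BOOLEAN
    NoUnauthOrder == orderPlaced => authenticated

    \* TLC can check these exhaustively when listed as invariants in the model,
    \* and TLAPS can machine-check the same claims stated as theorems:
    THEOREM Spec => []TypeOK
    THEOREM Spec => []NoUnauthOrder

If the definition doesn't mean what you thought, one of these checks fails and the counterexample usually makes the misunderstanding obvious.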
A fair question! I'd say it's not that different from using an LLM to write regular code: who verifies that the code the LLM wrote is indeed what you meant?
TLA+ was invented in the first place because Leslie Lamport thought natural language was a dubious tool for "specifying systems".
Yes, an LLM may even generate the TLA+ code correctly, but model checking is not the end goal of TLA+.
TLA+ is written to fully understand how a system works at an abstract level.
Anyways, I guess you could just read the LLM-generated TLA+ code. That would help you understand the abstraction of the system, but is the LLM's abstraction equal to your abstraction?
But vibe-coded TLA+ sounds extremely dangerous, especially in mission-critical stuff where it's required, like smart contracts, pacemakers, aircraft software, etc.
Not the OP, but I would rather give a formal specification of my system to an AI and have it generate the code.
I believe the point is that it's easier for a human to verify a system's correctness as expressed in TLA+, and then verify that the code matches that system, than it is to verify the entire codebase as a system all at once.
Then, if my model of the system is flawed, TLA+ will tell me.
I'm an AI bull, so if I give the LLM a natural-language description, I'd like it to explain the model instead of just writing the TLA+ code.
Using generative chatbots to write a formal spec is the most stupid idea ever. Specs are all about reasoning. You need to do the thinking to model the system in a very simplified manner. Formal methods and the generative BS are at the antipodes of reliability. This is an insult to reason. Please keep this nonsense away from the serious parts of CS.