> I think it would just be easier to hit compile on that spec
I have a hard time not being similarly skeptical about formal methods. They promise the ultimate holy grail of software development - releasing software with zero defects - but demand what appears to be years of study before you actually see results. There are lots of examples of techniques and tactics that promise amazing results if you "put in the work", even outside of software (like fitness and music) - but in over 30 years of software development practice, I've never met anybody who was applying formal methods at all, much less achieving defect-free software. If they could deliver what they promise with even a "mere" five years of dedicated study, I'd expect them to be more widespread by now.
As someone who was working on this decades ago [1][2], here's a useful recap.
- Back then, CPU power for the prover was a big problem. That's been fixed.
We were too early in the days of a 1 MIPS VAX.
- Prover theory is much better. We were using the Nelson-Oppen prover, an ancestor of today's SMT solvers. Now that's routine technology.
- Loose languages are a problem. You have to nail down all "undefined behavior", and
either detect it and forbid it, or give it precise semantics.
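As a minimal sketch of the "detect it and forbid it" option, here is 32-bit signed addition modeled in Python, with overflow trapped rather than left undefined (the 32-bit width and the trap-on-overflow policy are illustrative assumptions, not from the comment above):

```python
INT32_MIN, INT32_MAX = -2**31, 2**31 - 1

def checked_add(a: int, b: int) -> int:
    """32-bit signed addition where overflow is detected and forbidden,
    instead of being left as undefined behavior."""
    result = a + b
    if not (INT32_MIN <= result <= INT32_MAX):
        raise OverflowError(f"{a} + {b} overflows a 32-bit int")
    return result
```

The alternative - giving overflow precise semantics - would be to wrap the result modulo 2**32 instead of raising.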
- Integrate the verification annotations into the programming language.
Not as comments. They should be at least syntax- and type-checked when the
program is compiled, and they should use essentially the same syntax as the language they are attached to. Otherwise the annotations get out of date and nobody uses them.
- Break big assertions into lots of little assertions. Don't AND them. This improves
the diagnostics.
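A small example of the point about small assertions, using a hypothetical invariant in Python:

```python
def check_invariant(x: int) -> None:
    # One big conjunction: when it fails, the diagnostic can't tell you
    # which clause broke.
    #   assert x > 0 and x < 100 and x % 2 == 0
    # Broken into separate goals, each failure points at one condition:
    assert x > 0, "x must be positive"
    assert x < 100, "x must be below 100"
    assert x % 2 == 0, "x must be even"
```

A verifier gets the same benefit: three small proof obligations with three precise failure locations, instead of one opaque conjunction.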
- Debug the verification in the source language by adding asserts. The
verification system usually treats an assert as a proof goal, and assumes it is true for
the code that follows. That way you narrow the problem down to a small area.
If you need to prove something hard, get to the point where you have "assert(a); assert(b);" and need to prove that a implies b. Then you use an offline prover.
Don't work on the code problems in a prover directly.
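The narrowing technique can be sketched like this in Python (the routine and its invariants are made up for illustration):

```python
def normalize(xs: list) -> list:
    """Hypothetical routine, instrumented to localize a verification failure."""
    total = sum(xs)
    assert total > 0                       # (a) proof goal; assumed true below
    out = [x / total for x in xs]
    assert abs(sum(out) - 1.0) < 1e-9      # (b) if this is the hard step,
    # the remaining obligation is just "a implies b", which can be handed
    # to an offline prover instead of being debugged inside the prover.
    return out
```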
- You're not done until you get 100%. If you write "assert(false);", there is only one
error but the verification is totally meaningless.
- Don't get carried away with the formalism. Verification systems tend to be built
by people who think formalism is cool. That is a negative for getting work done.
- Undecidability and the halting problem are not issues. If your program is anywhere
near undecidable, it's broken. Microsoft took the position with their Static Driver Verifier that if the verifier can't decide termination easily, it doesn't get to be a signed kernel driver.
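One way to keep termination trivially decidable is to make the decreasing measure explicit, as in this Euclid's-algorithm sketch (the `variant` bookkeeping is illustrative, not from the comment above):

```python
def gcd(a: int, b: int) -> int:
    """Termination is obvious: b strictly decreases and is bounded below by 0."""
    assert a >= 0 and b >= 0
    while b > 0:
        variant = b                # the decreasing measure
        a, b = b, a % b
        assert 0 <= b < variant    # easy for any checker to confirm
    return a
```

If a loop has no such easily stated variant, that's usually a sign the loop itself is suspect, which is exactly the Static Driver Verifier's position.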
- Some things are hard to specify, and some things aren't. A database is an example of
a complicated system that's not too hard to specify. The specification is a full table
search of giant arrays. The implementation doesn't do it that way, but it's supposed to
behave as if it does. The other extreme would be a GUI program.
Can a button be formally verified (or fail verification) if, due to poor graphic design, it may or may not look like a button?
Another "other extreme": create the source for a new database manager that formally matches the behavior of another database manager, that you only have object code for. :)
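The database point can be sketched as a toy key lookup: the specification is a full scan of an array of rows, the implementation uses an index, and correctness means the implementation behaves as if it did the scan (table contents and names here are made up for illustration):

```python
rows = [("alice", 3), ("bob", 7), ("carol", 9)]

def lookup_spec(key):
    """Specification: a full table search of the raw rows."""
    for k, v in rows:
        if k == key:
            return v
    return None

index = {k: v for k, v in rows}    # the "clever" implementation's index

def lookup_impl(key):
    """Implementation: indexed lookup. Must agree with the spec."""
    return index.get(key)

# The verification obligation: for every key, impl behaves like the spec.
for key in ["alice", "bob", "carol", "dave"]:
    assert lookup_impl(key) == lookup_spec(key)
```

A real verifier would prove the agreement for all keys rather than test a few, but the shape of the obligation is the same.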
Years of study are definitely not required to get up and running! I attended a 3-day workshop for TLA+ and spent ~2 weeks modeling a new project that my team was working on. I found lots of potential edge cases that would result in data inconsistencies, without even having to write tests or think about how to articulate the issue in the first place, since the model checker outputs the exact program flow that leads to the error.
You still need to make sure your implementation matches the spec, but that’s an easier task than squashing concurrency bugs, let alone figuring out how to repro in the first place! The hope is that by writing the spec you avoid a good chunk of errors from the start instead of encountering them one by one.
I have met tons of folks who have used this in practice. It tends to be more applied in domains where errors are costly; e.g., it won't be that useful in the JS/Node.js world.
You might already be using proven-correct software somewhere in your stack without knowing it. You will most certainly be trusting your life to it when flying on an Airbus.