I have a hard time believing that a person living in the same world as me can reach such wrong conclusions.
> If a piece of code is buggy, how do you fix it? You add debug statements and observe the output, not construct a formal semantics and then prove the relevant properties.
Where does the author see the methodology of writing formal proofs used to solve problems in the real world, exactly?
What I see is the first methodology ("debug statements") applied in every field of human practice with the sole exception of mathematics: you make some observations, you come up with the most likely informal explanation for them (based on previous experience and knowledge of the failing system), and you either make more observations to verify your explanation or just try to fix the problem. If it doesn't work, you try something else.
That's what medical doctors, mechanics, plumbers, police investigators, etc. all do.
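For what it's worth, here is a minimal sketch of that "debug statements" methodology in Haskell (the buggy `average` function and the values are hypothetical, purely for illustration): you print the intermediate state, observe it, form an explanation, and fix the code.

```haskell
import Debug.Trace (trace)

-- Hypothetical buggy function: meant to average a list,
-- but it divides by (length xs - 1) by mistake.
average :: [Double] -> Double
average xs =
  let n = length xs - 1   -- bug: should be length xs
      s = sum xs
      -- the "debug statement": show the intermediate values, then return the result
  in trace ("n = " ++ show n ++ ", s = " ++ show s) (s / fromIntegral n)

main :: IO ()
main = print (average [1, 2, 3])  -- the trace shows n = 2 for a 3-element list
```

One look at the trace output is usually enough to form the informal explanation ("the divisor is off by one") and try a fix; no formal semantics or proof required.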
> Even the academics recognized the importance of “medieval practica; handbooks listing disorders from head to toe with a description of symptoms and treatment.”
And if my friends who study medicine and veterinary science are to be believed, those things are still alive and kicking.
Now, I'm not saying that medicine is still the same as it was in the Middle Ages (by the way, the 1500s are not the Middle Ages; the author gets that wrong too), or that computer science will never make any progress, or that there is no room at all for formal methods. However, formal proofs are not how most science and technology progresses, and stating that they are is simply historical falsification.
There is an old school of thought in computer science academia that worships mathematics, or rather an idealized version of how mathematics works. It is neither realistic nor applicable to the reality of computer science.
I just realized that someone may be upset that I didn't give any motivation for the last statement. There are four reasons why formal proofs do not really work for computer programming:
1) the fact that you wrote a formal proof doesn't mean that you proved something; proofs are as easy to get wrong as the program itself
2) the reason why proofs work for mathematics is that there are other people interested in reading through your proof. No one is interested in reading my proof that my code builds a valid SQL query string to retrieve some data from my own database, even assuming that I could publish it without revealing too much about my proprietary system.
3) to prove a system correct you first need a formal specification of the system. Even assuming that you know exactly what you want the system to do, you have to get the formal specification right, and that's as easy to get wrong as the code itself. Then you try to prove the system correct and the proof fails: did you get the specification wrong, the proof wrong, or the code wrong? (See the sketch after this list.)
"You have a problem, so you decide to use a formal system. Now you have three problems."
4) At this point people propose that we just write the formal specification and let a magic system write the code. Such a system doesn't exist, and we have 60 years of experience writing code that works and zero years of experience writing formal specifications that work.
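To make point 3 concrete, here is a small, hypothetical sketch in Haskell (using the QuickCheck library; the names and the property are mine, not from the article) of a specification that is itself wrong: a "spec" for sorting that only demands a sorted output, so a function that throws the data away satisfies it just as well as a real sort does.

```haskell
import Data.List (sort)
import Test.QuickCheck (quickCheck)

-- A hypothetical "specification" of sorting: the output must be in
-- non-decreasing order.  It looks reasonable, but it is incomplete:
-- it never says the output must be a permutation of the input.
sortedSpec :: ([Int] -> [Int]) -> [Int] -> Bool
sortedSpec f xs = isSorted (f xs)
  where isSorted ys = and (zipWith (<=) ys (drop 1 ys))

-- A "sort" that trivially satisfies the broken spec by discarding the data.
bogusSort :: [Int] -> [Int]
bogusSort _ = []

main :: IO ()
main = do
  quickCheck (sortedSpec sort)       -- passes, as expected
  quickCheck (sortedSpec bogusSort)  -- also passes: the spec is wrong, not the code
```

If you had "proved" bogusSort correct against sortedSpec you would have proved nothing useful, which is exactly the point: the specification has to be debugged too.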
Unfortunately the paper is not freely available, but here is one paragraph from the introduction:
"Here, we report on the full formal verification of a critical system from a high-level model down to very low-level C code. We do not pretend that this solves all of the software complexity or error problems. We do think that our approach will work for similar systems. The main message we wish to convey is that a formally verified commercial-grade, general-purpose microkernel now exists, and that formal verification is possible and feasible on code sizes of about 10,000 lines of C. It is not cheap; we spent significant effort on the verification, but it appears cost-effective and more affordable than other methods that achieve lower degrees of trustworthiness."
It was an interesting article, and the process they used is not typical. If I remember correctly, they used theorem-proving tools, prototyped in Haskell, and ultimately converted that to C. They claim that Haskell seemed to save them time and that their man-hours were ultimately similar to other, more traditional microkernel efforts.
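To give a rough picture of that "executable Haskell model, then C" idea (this is a toy sketch of mine, not seL4 code or their actual process), imagine a small, purely functional model of one kernel operation; the model is easy to reason about, and the hand-written C would later be checked against it.

```haskell
-- Toy model of a round-robin scheduler step (purely illustrative, not seL4).
data Thread = Thread { tid :: Int, runnable :: Bool }
  deriving (Show)

-- Pick the first runnable thread and rotate it to the back of the queue;
-- return Nothing if no thread is runnable.
schedule :: [Thread] -> Maybe (Thread, [Thread])
schedule q =
  case break runnable q of
    (_, [])            -> Nothing
    (before, t : rest) -> Just (t, rest ++ before ++ [t])

main :: IO ()
main = print (schedule [Thread 1 False, Thread 2 True, Thread 3 True])
```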
Edit: That said, even if this is indeed practical to do, it is practical only for a small subset of programming work, at least at this time.