
Ah, but that's not my question! My question is whether the specifications we are interested in implementing in practice are hard in theory.


That's a good question and I don't think anyone has a definitive answer to that. I certainly don't. I think we should do empirical research and try to find out. What would be your guess and why?

I do have some ideas, though. First, I think that the effort required for end-to-end proofs -- even for heavily simplified, relatively small programs -- does indicate that the problems are inherently hard, because I don't think we're lacking some secret proof technique that would all of a sudden make things significantly easier. The gap between cases that are extremely hard (Goldbach's conjecture) and real-world code doesn't seem big. Second, I think that empirical research might uncover something that can have a bigger impact. End-to-end proof is not a very important practical problem, because very few software projects ever need it or anything close to it. Improving correctness or reducing development costs (two sides of the same coin) for non-high-assurance programs is much more important, and there the theory leaves much more wiggle room (because once you step away from an actual watertight proof, basically anything is possible). I believe that empirical research can uncover certain common program patterns and bugs that can be analyzed/prevented, and that those may give a significant boost. But without empirical research, we won't even know which patterns/bugs to tackle.

I think that the emphasis on deductive proofs is misplaced (in light of both theory and practice) as it's hard to fine-tune[1], and indeed, the software verification community has been moving further and further away from that direction for a long while; since the seventies, I believe. It's the PL community that's decided this is interesting to them. Software verification is interested in many other approaches, like static analysis (which can verify some properties -- hopefully important ones -- some of the time), probabilistic methods, combinations of formal methods and testing (like so-called "concolic" testing), etc.

------

[1]: I'm referring to actual proofs with dependent types. Simpler type systems can be invaluable, if researchers only tried to figure out which properties are actually important. For example, Rust's borrow checker is extremely interesting, and tackles a class of bugs that's known to be both very harmful and easy to prevent mechanically. We should be looking for more of those. An example of what not to do if you want real-world impact is the peculiar focus on effect systems. Other than shared state (which is a whole other matter), how did some PL researchers get it into their minds that effects are an important problem to tackle?
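To make the borrow-checker point concrete, here's a minimal sketch (the function name is mine, for illustration) of the aliasing discipline it enforces mechanically: a shared borrow must end before the data can be mutated, which rules out use-after-free and iterator-invalidation bugs at compile time.

```rust
// The borrow checker forbids mutating a Vec while a reference into it
// is still live. Copying the value out ends the borrow, so this compiles.
fn first_then_push(v: &mut Vec<i32>) -> i32 {
    let first = *v.get(0).unwrap_or(&0); // copy out; the borrow ends here
    // Holding `&v[0]` across the push below would be rejected:
    // "cannot borrow `v` as mutable because it is also borrowed as immutable"
    v.push(42);
    first
}

fn main() {
    let mut v = vec![1, 2, 3];
    let f = first_then_push(&mut v);
    println!("{} {:?}", f, v); // 1 [1, 2, 3, 42]
}
```

The interesting part is that this is a weak property (no proofs of functional correctness anywhere) yet it kills an entire, empirically common bug class.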


> That's a good question and I don't think anyone has a definitive answer to that. I certainly don't. I think we should do empirical research and try to find out. What would be your guess and why?

I have absolutely no idea. The entire focus of my software development philosophy for a decade or so has been to make programs easier for humans to understand, not easier to write proofs about (although there's some overlap). I do wonder how big the weak but extremely useful class of properties (like Rust's memory safety) is.


This, I think, is a beautiful example of what empirical research can uncover: https://www.usenix.org/system/files/conference/osdi14/osdi14...

> almost all (92%) of the catastrophic system failures are the result of incorrect handling of non-fatal errors explicitly signaled in software.

The exception-handling code is there, it's just that no thought has been put into it. They say that their simple linting tool could have detected and prevented a good portion of those very costly bugs.
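The shape of the bug is roughly this (a sketch in Rust rather than the Java systems the paper studied; the function names are hypothetical): the error is explicitly signaled, the handler is present, but it silently swallows the error instead of dealing with it -- exactly the pattern a simple lint can flag.

```rust
use std::num::ParseIntError;

// The anti-pattern: the error path exists but no thought went into it.
// A bad value is silently collapsed to a default and the failure is lost.
fn parse_port_sloppy(s: &str) -> u16 {
    s.parse().unwrap_or(0)
}

// Actually handling the explicitly-signaled error: propagate it so the
// caller is forced to decide what a malformed config should mean.
fn parse_port(s: &str) -> Result<u16, ParseIntError> {
    s.parse()
}

fn main() {
    assert_eq!(parse_port_sloppy("oops"), 0);   // failure vanished
    assert!(parse_port("oops").is_err());       // failure surfaced
    assert_eq!(parse_port("8080"), Ok(8080));
}
```

Detecting the sloppy variant needs no deep semantic analysis, which is the paper's point: a shallow check over error-handling code catches a disproportionate share of catastrophic failures.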



