
The only point in here that's defensible is (5), and that's because it's so vague that it doesn't mean much.

(1) Split I and D memory isn't a silver bullet against memory corruption flaws; "Harvard" architecture machines have had remote code execution flaws. Attackers aren't writing directly to program text; they're writing to data structures that steer control flow (most famously the stack) inside the process. Over the past 10 years, randomization (ASLR) has been more effective than any explicit control over what can or can't be executed.
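To make that concrete, here's a minimal sketch (hypothetical code, not from any real exploit, assuming a typical 64-bit struct layout) of why split I/D memory doesn't save you: the attacker only ever writes to data, but that data is what the program jumps through.

    #include <stdio.h>
    #include <string.h>

    struct session {
        char name[16];            /* attacker-controlled input lands here */
        void (*on_close)(void);   /* control data sitting right behind it */
    };

    static void normal_close(void) { puts("bye"); }
    static void evil(void)         { puts("attacker wins"); }

    int main(void) {
        struct session s;
        s.on_close = normal_close;

        /* Malicious input: 16 filler bytes, then the address of evil()
           positioned where s.on_close lives. */
        void (*target)(void) = evil;
        unsigned char input[16 + sizeof target];
        memset(input, 'A', 16);
        memcpy(input + 16, &target, sizeof target);

        /* The bug: an unchecked copy that trusts the input's length.
           Program text is never written; only data is. */
        memcpy(s.name, input, sizeof input);

        s.on_close();             /* jumps to evil(); text segment untouched */
        return 0;
    }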

(2) Most modern memory corruption flaws don't involve someone's strcpy(). It's far more common to find math problems (integer overflows, signedness errors) in the handling of counted data structures --- you know, like the ones Charlie Stross praises. Meanwhile, if you want "safe" string handling, you use a string library.
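A sketch of the pattern, using a hypothetical counted-string type: no strcpy() anywhere, every copy is length-counted, and it's still exploitable, because the arithmetic on the counts wraps.

    #include <stdint.h>
    #include <stdlib.h>
    #include <string.h>

    struct counted_str {
        uint32_t len;
        char    *data;
    };

    /* Concatenation that looks disciplined but isn't: if the two
       lengths sum past 2^32 - 1, `total` wraps to a small number. */
    struct counted_str *concat(const struct counted_str *a,
                               const struct counted_str *b) {
        uint32_t total = a->len + b->len;         /* integer overflow */
        struct counted_str *out = malloc(sizeof *out);
        if (out == NULL) return NULL;
        out->data = malloc(total);                /* tiny allocation  */
        if (out->data == NULL) { free(out); return NULL; }
        memcpy(out->data, a->data, a->len);       /* heap overflow    */
        memcpy(out->data + a->len, b->data, b->len);
        out->len = total;
        return out;
    }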

(3) We have pervasive TCP/IP encryption today. It does nothing to address computer integrity. Encryption only keeps attackers from talking to you if you can authenticate and authorize every connection; you talk to thousands of sites daily, and authorizing each of them is untenable. Meanwhile, this business of "listeners promiscuously logging traffic and cracking it at leisure" (side note: nobody logs "promiscuously" anymore): it takes an awful lot of leisure to crack AES-128, or even DES-EDE (3DES).
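The authentication point in code, as a sketch against the modern OpenSSL (1.1+) client API; make_ctx() is a hypothetical helper. Both branches give you encryption; only one tells you who you're encrypting to.

    #include <openssl/ssl.h>

    SSL_CTX *make_ctx(int authenticate) {
        SSL_CTX *ctx = SSL_CTX_new(TLS_client_method());
        if (ctx == NULL) return NULL;
        if (authenticate) {
            /* Fail the handshake unless the peer's cert chains to a
               trusted CA. This, not the cipher, decides who may talk
               to you. */
            SSL_CTX_set_verify(ctx, SSL_VERIFY_PEER, NULL);
            SSL_CTX_set_default_verify_paths(ctx);
        } else {
            /* Still encrypted, but to whoever answered the SYN: no
               integrity benefit at all. */
            SSL_CTX_set_verify(ctx, SSL_VERIFY_NONE, NULL);
        }
        return ctx;
    }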

(4) "The world wide web" has problems, but it's unclear that Stross can actually point them out. Turning Javascript on is unlikely to make you "some script kid's bitch" (turning Java on is another story). People turn off Javascript to avoid XSS, which makes one site a bitch, but not your whole browser.

(6) Bloody Microsoft took security seriously sooner than any other large vendor, as we're all discovering to our chagrin today. Nobody took software security seriously in 2000 except for software security practitioners. It's as true in 2001 as it was in 1995 that a skilled attacker could have popped any Sun machine, any sufficiently complex web app, any document viewer, any graphics format, or any SSL stack. So why blame Microsoft? They at least got a grip on the problem.

Let me be direct: things would have been just as bad if our CPUs had split I&D memory. They'd have been just as bad if everyone used Pascal strings. They'd have been just as bad if we had pervasive IPSEC from day #1. They might be better if we didn't have the WWW, but then, we wouldn't care how good or bad things were (like we didn't seem to care so much when people were owning up DMS500 switches at the phone company in 1991). Things would have been just as bad --- maybe, believe it or not, worse --- had Solaris been the dominant OS in 2000.

The problem is, nobody really knows how to make correct software. It's a core problem in Software Engineering and it's unsolved. Without correct software, you can't have secure software. Sorry.




Re. #1, Harvard architecture also does nothing about source-level code injection attacks, as we see with SQL queries and with complex data structures that direct the flow of execution (sketch below).
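A sketch of that with SQLite (hypothetical lookup functions; the API calls are the real sqlite3 ones). The "code" being injected is SQL text interpreted by the database, so the CPU's memory architecture never enters the picture:

    #include <sqlite3.h>
    #include <stdio.h>

    /* Vulnerable: attacker-supplied `name` becomes part of the SQL
       program itself; name = "x' OR '1'='1" rewrites the query. */
    void lookup_unsafe(sqlite3 *db, const char *name) {
        char sql[256];
        snprintf(sql, sizeof sql,
                 "SELECT secret FROM users WHERE name = '%s';", name);
        sqlite3_exec(db, sql, NULL, NULL, NULL);
    }

    /* Safer: a prepared statement keeps data and query structure
       apart, which is the actual fix; hardware can't do this for you. */
    void lookup_safe(sqlite3 *db, const char *name) {
        sqlite3_stmt *stmt;
        if (sqlite3_prepare_v2(db,
                "SELECT secret FROM users WHERE name = ?;",
                -1, &stmt, NULL) != SQLITE_OK)
            return;
        sqlite3_bind_text(stmt, 1, name, -1, SQLITE_TRANSIENT);
        while (sqlite3_step(stmt) == SQLITE_ROW)
            printf("%s\n", (const char *)sqlite3_column_text(stmt, 0));
        sqlite3_finalize(stmt);
    }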

Harvard architecture is a pain in the ass anyway. When I designed a Harvard architecture chip (8051 derivative) into a product, I added an external address space mapper to turn it into a von Neumann architecture. The software folks had to write a custom file system/linker/loader and were thankful. The alternative was painfully reprogramming the chip with every recompile.

Re. #5, the large system developers I know love them their C# and sing the praises of Microsoft for making C/C++ avoidable. Their 500 kLOC distributed system integration project would be nearly untenable if they had to manually get every container iteration and bounds check exactly right. And Microsoft does not talk about it much, but I get the impression that they have been running heavy code reviews and static analyzers on their C/C++ software. At least there seems to have been a decline in exploitable bounds check errors.
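For anyone who hasn't lived it, the class of bug in question is as dumb as this (hypothetical C, the classic off-by-one): in C it silently corrupts whatever sits next to the array, while the equivalent C# loop throws IndexOutOfRangeException at the first bad index.

    #include <stdio.h>

    int main(void) {
        int totals[8] = {0};
        /* "<=" where "<" was meant: the ninth write lands past the
           end of the array, and the language does nothing about it. */
        for (int i = 0; i <= 8; i++)
            totals[i] = i * i;
        printf("%d\n", totals[7]);
        return 0;
    }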


Every piece of Microsoft software that has a customer-visible name --- and many that don't --- has been run through a gauntlet of static source code analyzers, has been reviewed by developers trained in secure C/C++ programming, has been "threat modeled" by internal and external teams to find design alternatives that reduce exposure to threats, and has been subjected to multiple external pen tests, often at every dot release.

The result is demonstrably imperfect, which just underscores the point that we don't know how to produce secure software "in the large".

Here's a shorthand: if Daniel J. Bernstein (whose qmail and djbdns are the canonical examples of security-obsessed software) can't get it right, it's absurd to suggest that any software company's going to do any better.



