Or maybe it's "where we went right". Let's imagine a world where TCP/IP was encrypted. What consequences would this have? 1: It would have spread a lot slower. Effective encryption was VERY expensive 30 years ago. 2: The powers in charge would have been much less likely to let commercial interests and non-research institutions connect.
The most likely possibility in that environment? It may well have caused AOL to win. Remember, network effects are hugely important. Sure, the digerati would have accounts on CompuServe or The WELL, but they would also have had an AOL account, because everybody was on AOL.
Exactly. Von Neumann architecture, NUL-terminated strings in C, and the lack of encryption in TCP/IP are all arguably things we got spectacularly right. All three are efficiency-and-simplicity versus features-and-complexity trade-offs. Imagine how much extra energy, latency, and how many points of failure would be introduced into the system if almost every network device were required to encrypt/decrypt every packet. Naughty things like error correction, and a zillion tricks our everyday game programmer uses, would be impossible.
Bull. It's not about technology, it's about people.
Back in the 1960s, people had a fear that there was going to be this one big mainframe, attended to by a bunch of priests, that would rule the world. (See "Colossus: The Forbin Project".)
Just a decade later, in Donn Parker's 1976 book, "Crime by Computer", we see that the computer crime landscape is substantially the same as it is today. We see embezzlement, computers being used to create thousands of false insurance policies, data theft through timesharing terminals, physical attacks on computers, and concerns about privacy. The only thing that's missing is phishing... And this is just before microcomputers hit the market.
By the early 1980s the "cyberpunk" genre is established in science fiction and Neil Young sang "Computer Cowboy" on his album Trans... By then we knew the threat of computers was anarchy, not total government control.
The fact is, evil is in the heart of man. People are going to use whatever technology is available to do what they're going to do. Criminals use cars, air travel, and telephones every day. We can certainly close off certain avenues of technological attack, but as long as there is a motive, people are going to find the opportunity to commit crimes.
Only in the heart of those that commit crimes. And we try to design our society in such a way that we reduce the crime. And the internet is now part of our society, which is very technological in nature.
So we have to design the internet in such a way that we take into account the reality that some people are not nice.
And that makes it about technology, not about people.
According to one estimate pushed by the FBI in 2006, computer crime costs US businesses $67 billion a year. And identity fraud in the US allegedly hit $52.6Bn in 2004.
Even allowing for self-serving reporting (the FBI would obviously find it useful to inflate the threat of crime, if only to justify their budget requests), that's a lot of money being pumped down a rat-hole. Extrapolate it worldwide and the figures are horrendous — probably nearer to $300Bn a year. To put it in perspective, it's like the combined revenue (not profits; gross turnover) of Intel, Microsoft, Apple, and IBM — and probably a few left-overs like HP and Dell — being lost due to deliberate criminal activity.
I call bullshit. These numbers are, imho, just as made up as the RIAA's "losses to piracy" numbers. I don't believe them, not even for one second. The real numbers could be as much as 3 or 4 orders of magnitude smaller.
The real numbers are no doubt N orders of magnitude smaller, but 3-4 sounds high. Could we lose billions of dollars a year to crime abetted by software insecurity? Yes.
At a first pass, remember that every piece of hardware and software that every company buys to address these problems is part of the cost. Antivirus alone gets us over a billion.
C doesn't really have a string type; it just has something called 'pointer to char' and another thing called 'array of char'. The string functions are implemented by the standard library, not by the C language, and the standard library was not written in a way that hides the guts of the implementation from view, so it is nearly impossible to fix after the fact. The few string routines that are now 'overwrite safe' notwithstanding, the 'old' stuff is still in use, and plenty of new code is still being produced using these unsafe functions. NUL-terminated strings are a convention, not a language implementation detail. So the blame does not really go to the C language but to the library implementation.
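(To make that concrete, here's a minimal sketch; the buffer names and sizes are mine, not from the post above. A C "string" is just a char array with a trailing '\0', and the old unbounded strcpy() will happily run past the end of a destination buffer, while a bounded call like snprintf() will not.)

#include <stdio.h>
#include <string.h>

int main(void)
{
    /* A "string" is just bytes plus a terminating '\0': a library
       convention, not a language-level type. */
    char greeting[6] = { 'h', 'e', 'l', 'l', 'o', '\0' };
    printf("%s has length %zu\n", greeting, strlen(greeting));

    char small[8];
    const char *input = "definitely longer than eight bytes";

    /* The classic unsafe idiom: strcpy() trusts the destination to be
       big enough, so this call would overflow 'small' (undefined
       behavior), which is why it is left commented out:
       strcpy(small, input);                                          */

    /* A bounded alternative: snprintf() writes at most sizeof(small)
       bytes and always NUL-terminates.                               */
    snprintf(small, sizeof(small), "%s", input);
    printf("truncated copy: %s\n", small);

    return 0;
}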
This is just message board geekery. Nobody could reasonably argue that the requirement is to "segfault"; C runs in many places where there's no such concept. Jacques' point was that C string literals are immutable. Stop bickering with him; he's right.
$ cat immutable.c
#include <stdio.h>
int main()
{
char * s = "abc";
*s = 'c';   /* modifies a string literal: undefined behavior */
printf("%s\n", s);
return 0;
}
$ cc -o immutable immutable.c
$ ./immutable
cbc
EDIT: I can't reply to you, so I have to take this route. I also think that this behavior is not desirable. However, I do not see the passage in the standard that forbids writable strings. On the contrary, in Annex J writable string literals are explicitly allowed as an extension, and I don't see any indication that the presence of writable strings makes an implementation non-conforming. Perhaps you could point to the passage in the standard that requires string literals to be immutable (your words, not mine), or that requires a compiler to issue a warning if a string literal is modified.
If you still wonder why I'm arguing this: it is not for the sake of nitpicking, but to support kabdib's post.
That Sun's CC does something goofy here has nothing to do with the argument. Code written to depend on that weird behavior is unreasonable. Why are you arguing this point? I don't get what you think you're proving here.
There are plenty of platforms that C runs on that are incapable of making anything immutable, including program text itself. That doesn't make C strings less immutable. Why isn't your compiler yelling at you for doing that? I think it's broken!
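(A minimal sketch of how to make the compiler yell, assuming GCC; the file name immutable2.c is mine. Const-qualifying the pointer turns the write into a hard compile error, and GCC's -Wwrite-strings option gives string literals a const-qualified type, so even assigning one to a plain char * draws a warning, which would have flagged the original immutable.c at the initialization of s.)

/* immutable2.c: same program, but with the literal behind a
   const-qualified pointer. In C the literal's type is char[N] and
   modifying it is undefined behavior, so a const pointer is the
   safer way to hold it. */
#include <stdio.h>

int main(void)
{
    const char *s = "abc";
    /* *s = 'c'; */   /* uncommenting this is a hard compile error */
    printf("%s\n", s);
    return 0;
}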
These all seem pointless next to the inevitable social engineering that led John in accounting to give that Excel spreadsheet of bank info to "Dan" from an "outside auditing firm". PEBKAC!!
The only point in here that's defensible is (5), and that's because it's so vague that it doesn't mean much.
(1) Split I and D memory aren't a silver bullet against memory corruption flaws; "Harvard" architecture machines have had remote code execution flaws. Attackers aren't writing directly to program text; they're writing to data structures that effect command & control (most famously the stack) inside the process. Over the past 10 years, randomization has been more effective than any explicit control over what can or can't be executed.
(2) Most modern memory corruption flaws don't involve someone's strcpy(). It's far more common to find math problems in the handling of counted data structures --- you know, like the ones Charlie Stross praises (there's a sketch of that kind of bug at the end of this comment). Meanwhile, if you want "safe" string handling, you use a string library.
(3) We have pervasive TCP/IP encryption today. It does nothing to address computer integrity. Encryption only keeps attackers from talking to you if you can authenticate and authorize every connection. You talk to thousands of sites daily; authorizing them is untenable. Meanwhile, this business of "listeners promiscuously logging traffic and cracking it at leisure" (side note: nobody logs "promiscuously" anymore): it takes an awful lot of leisure to crack AES128, or even DES-EDE.
(4) "The world wide web" has problems, but it's unclear that Stross can actually point them out. Turning Javascript on is unlikely to make you "some script kid's bitch" (turning Java on is another story). People turn off Javascript to avoid XSS, which makes one site a bitch, but not your whole browser.
(6) Bloody Microsoft took security seriously sooner than any other large vendor, as we're all discovering to our chagrin today. Nobody took software security seriously in 2000 except for software security practitioners. It's as true in 2001 as it was in 1995 that a skilled attacker could have popped any Sun machine, any sufficiently complex web app, any document viewer, any graphics format, or any SSL stack. So why blame Microsoft? They at least got a grip on the problem.
Let me be direct: things would have been just as bad if our CPUs had split I&D memory. They'd have been just as bad if everyone used Pascal strings. They'd have been just as bad if we had pervasive IPSEC from day #1. They might be better if we didn't have the WWW, but then, we wouldn't care how good or bad things were (like we didn't seem to care so much when people were owning up DMS500 switches at the phone company in 1991). Things would have been just as bad --- maybe, believe it or not, worse --- had Solaris been the dominant OS in 2000.
The problem is, nobody really knows how to make correct software. It's a core problem in Software Engineering and it's unsolved. Without correct software, you can't have secure software. Sorry.
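Re (2), to make "math problems in the handling of counted data structures" concrete, here's a minimal sketch; the struct and field names are invented for illustration, not taken from any real protocol. An attacker-controlled count wraps the 32-bit size computation, so a tiny buffer is allocated while the copy loop still trusts the full count; no strcpy() anywhere.

#include <stdint.h>
#include <stdlib.h>
#include <string.h>

/* Hypothetical length-prefixed record: 'count' arrives off the wire
   and is attacker-controlled. */
struct record {
    uint32_t       count;
    const uint8_t *payload;   /* supposedly count * 4 bytes long */
};

uint32_t *parse_records(const struct record *r)
{
    /* BUG: the size is computed in 32 bits, so a count near
       UINT32_MAX / 4 wraps to a tiny number of bytes...              */
    uint32_t nbytes = r->count * 4u;
    uint32_t *items = malloc(nbytes);
    if (items == NULL)
        return NULL;

    /* ...while the copy below still trusts 'count' and writes far
       past the end of the undersized buffer.                         */
    for (uint32_t i = 0; i < r->count; i++)
        memcpy(&items[i], r->payload + (size_t)i * 4, 4);

    return items;
}

/* A fix checks the bound before doing the arithmetic, e.g.
   if (r->count > SIZE_MAX / sizeof(uint32_t)) return NULL;
   and then allocates (size_t)r->count * sizeof(uint32_t) bytes.      */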
Re. #1, Harvard architecture also does not solve problems with source code injection attacks, as we see with SQL queries and complex data structures that direct the flow of execution.
Harvard architecture is a pain in the ass anyway. When I designed a Harvard architecture chip (8051 derivative) into a product, I added an external address space mapper to turn it into a von Neumann architecture. The software folks had to write a custom file system/linker/loader and were thankful. The alternative was painfully reprogramming the chip with every recompile.
Re. #5, the large system developers I know love them their C# and sing the praises of Microsoft for making C/C++ avoidable. Their 500 kLOC distributed system integration project would be nearly untenable if they had to manually get every container iteration and bounds check exactly right. And Microsoft does not talk about it much, but I get the impression that they have been running heavy code reviews and static analyzers on their C/C++ software. At least there seems to have been a decline in exploitable bounds check errors.
Every piece of Microsoft software that has a customer-visible name --- and many that don't --- has been run through a gauntlet of static source code analyzers, has been reviewed by developers trained in secure C/C++ programming, has been "threat modeled" by internal and external teams to find design alternatives that reduce exposure to threats, and has been subjected to multiple external pen tests, often on the dot release.
The result is demonstrably imperfect, which just underscores the point that we don't know how to produce secure software "in the large".
Here's a shorthand: if Daniel J. Bernstein can't get it right, it's absurd to suggest that any software company's going to do any better.
>And Microsoft, by dropping security support for older OSs, aren't helping the problem.
I disagree with this. The only way to get users off defunct OSes, and onto the newer, fundamentally more secure ones, is to stop supporting the old ones.
Microsoft spends serious dollars shoring up the security of Windows XP, a 10-year-old operating system. What kind of security support do you think OS X 10.0 customers get?
Stross' comment here is probably sparked by news stories about Microsoft dropping support for Win2k and XPSP2. But the solution to the XPSP2 problem is simply to upgrade to XPSP3. I'm not sure, but I think that if you had auto-update on (and you're crazy if you don't), you got that automatically.
If you're deployed permanently on Win2K, nothing Microsoft does is going to make you secure. Win2K lives in places where access to the same Ethernet collision domain is already game-over for an attacker.
It is simply unreasonable to suggest that Microsoft is making people less secure by encouraging them to get off operating systems that were first released to QA in 1998.
NB: as always, I have to post the disclaimer that while we've done work for MSFT in the past, our entire company is standardized on Apple hardware and Apple operating systems.
>User education, or the lack of it. (Clutches head.) I have seen a computer that is probably safe for most users; it's called an iPad, and it's the digital equivalent of a fascist police state: if you try to do anything dodgy, you'll find that it's either impossible or very difficult.
I'd be interested to see some security studies that compare banking behavior on iOS/Android vs. desktop. I don't access financial data on my phone, but my impression is that the way these devices are designed to be used without much training makes it a lot easier for social engineering to succeed, which is the primary attack vector anyway.
This is true; it's unfortunate that Stross is misinformed on this point. But it's also true that all the weaknesses of C are still present in C++. (I suppose std::string mitigates the char* damage in C somewhat, though I suspect there are rather too many APIs out there still using char*. And the lack of bounds checking is an ever-present problem...)
I don't buy that. Maybe it's true in the "reads Hacker News" monoculture (or the "comments on Hacker News" monoculture), but that sample is not representative of the wider programming world.
Apple's not going to spend several million dollars of engineering effort to create another C++ compiler if nobody's writing C++. Companies aren't going to spend money on sending representatives to C++0x standards meetings if their customers aren't using C++.
People are realizing that they don't have to program in C++ for many different areas. That doesn't mean that they aren't still writing parts of their apps in C++. And there's a lot of existing C++ code out there that still has to be extended, debugged, and maintained.
I don't know. My company is, in any given week, reviewing code for as many as 15 different applications, for companies running the gamut from web startups to Win32 desktop software vendors to embedded software vendors.
I stand by my assertion. Most people aren't writing in C/C++ anymore. Even the embedded vendors find ways to avoid it.
Fair enough. Perhaps all the C++ effort is simply a dying gasp from the collective industry.
In any event, we're talking to different sets of embedded vendors. The embedded vendors I talk to are still using C++ and so are their customers. Maybe I'm talking to the less profitable ones, though. ;)
I can't imagine how C++ standardization efforts could possibly correlate in any way with industry adoption. The two things are completely orthogonal.
C++ hasn't been eradicated, but --- even in some embedded environments --- it is no longer the go-to language (Java seems to have that honor now).
Bringing this back to Stross' point, though: ASCIIZ strings aren't more relevant to modern computer security because of C++. Obviously, C++ disfavors them (though most C++ devs I know still drop to charstars when std::string gets frustrating), but, more importantly, they have almost nothing to do with Java, C#, Python, or PHP string handling.
> I can't imagine how C++ standardization efforts could possibly correlate in any way with industry adoption. The two things are completely orthogonal.
I don't think they're orthogonal: standardization efforts don't happen until you have significant industry adoption of technology X from several vendors, and they (and/or their customers) decide it'd be better if their implementations could interoperate. Why do you think they're completely orthogonal?
It's possible, of course, that the C++0x (C++1x?) standardization round is taking place on the peak or the decreasing side of C++ adoption (due to the increased complexity of C++, users finding benefits to non-C++ languages, movement into areas where C++ doesn't help much, etc.). But I don't think that's the case; perhaps we're on a plateau of C++ adoption, if anything.
Nobody's arguing that C++ is a fringe language. Clearly it has huge adoption. But that has nothing to do with whether it's a popular choice for new software. Sorry, it simply isn't.
The point wouldn't make sense even if its premise was accurate. The JVM is also implemented in C. But you don't see a lot of ASCIIZ overflows in Java code. The C++ std::string class doesn't suffer from C's wild-west memory handling on cfront-style compilers.
> The most likely possibility in that environment? It may well have caused AOL to win. Remember, network effects are hugely important. Sure, the digerati would have accounts on CompuServe or The WELL, but they would also have had an AOL account, because everybody was on AOL.
I shudder.