The programming world is like this too, only worse. Programming techniques transfer between languages, but nobody seems to realize that. Instead, you see each new language community "discover" something every other language community had known for years. "Those languages suck, so we ignore everything they do."
It is depressing, and I have nothing more to say about this.
In programming, the question of whether a C++ programmer can walk in and do Java is still open. Some people will claim it's not possible; others will give the person a chance. In mathematics, as far as I understand the current situation, there is almost never any debate. Once you are an algebraic geometer or whatever, you aren't going to walk into another subfield and start working, except in very exceptional cases. The relative openness of programming, in fact, is why I switched from math to programming.
Unless you're dealing with business executive types (who wouldn't notice if you were lying anyway), most of the time, to most people, a Java programmer and a C programmer and a Ruby programmer are all really just programmers.
They may have different ways of dealing with problems (the C programmer will lie awake trying to remember if he dropped a free(), while the Ruby programmer will stay up nights trying to figure out how to stop copying that list O(n) times), but the point is that they're still dealing with the same problems (it's always memory, isn't it?). The techniques may be different, but the questions and basic concepts never change.
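To make that concrete, here's a toy sketch in Ruby (the method names are mine, purely for illustration) of the kind of thing that keeps the Ruby programmer up at night: building a list by reassignment copies the whole array on every step, while appending in place doesn't. The underlying concept, accidental O(n) work per operation, is recognizable from any language.

```ruby
# Hypothetical example: two ways to build the same array.
def build_with_copy(n)
  a = []
  # `a += [i]` allocates a brand-new array each iteration,
  # copying all existing elements: O(n) work per append.
  n.times { |i| a += [i] }
  a
end

def build_in_place(n)
  a = []
  # `a << i` mutates the existing array: amortized O(1) per append.
  n.times { |i| a << i }
  a
end
```

Both produce the same result; only the second scales. A C programmer would spot the same quadratic blowup in a realloc-per-element loop, which is the point: different techniques, same questions.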
In math, this is far from the case. In geometry, you never have to deal with infinity the same way you do in set theory. In fact, you don't even have to understand the idea of infinity the same way as a set theorist, and because of this, you _can't_ become a set theorist (unless you want to go back to undergrad and disappoint your parents _again_). It's like saying that a Java programmer doesn't need to understand that memory exists. Maybe they don't have to directly allocate and free it, but they still need to know how much they have and what happens when they use too much of it, and they can certainly recognize the same issues in any other language, even if they don't know how to fix them.
Mathematicians just don't have the same amount of common ground, and it's not that they can't bridge the gap (because there's too much material or something); it's that they don't really even want to, because they see it as boring or a waste of time (and in the case of analysis, they'd be right ;-)).
As a computer science professor, I have to say that this misreads the article.
Oh sure, computer science, and especially certain subfields like software engineering and languages, reinvent the wheel all the time. A new fad comes in and its proponents are too egotistical, or too lazy, to be bothered realizing it's the same as an old fad. This happens largely because the literature of computer science is exceptionally broad and not very deep. But this is hardly special to CS: it's common practice in engineering. More to the point, it doesn't have anything to do with what the author was on about.
What the author was talking about is the tendency, in mathematics, for the entire field to become balkanized into small groups with little interdisciplinary crosstalk and a disturbing degree of inbreeding. Many departments consist of specialists who cannot, or will not, talk to one another about their work. Sometimes no one understands what anyone else is doing or talking about. That's the claim anyway, and I have no reason to doubt it from what I've seen myself.
Computer science is nothing like this. Even computer science theory is highly interdisciplinary, application-oriented, and accessible to the mainstream CS audience (or at least to CS academics). In my department, I can talk to every single faculty member, reasonably intelligently, about what they are doing and how it is relevant to others. I can read their papers and more or less understand them. And I'm not some kind of uberprofessor, quite to the contrary. It's just that the field isn't very deep yet.
I disagree. At least you can "understand" what someone has written in another language. In fact, in the end, basically everyone is speaking the same language. My experience in math is that you become so specialized you literally have no idea what problem domain the other person is even working in. The only discipline I could compare this to might be theoretical computer science, but that's basically a closely related variant of math.
Edit: Come to think of it, literature may work. Nowadays you don't become an expert in lit, you become an expert in Shakespeare or Proust or Dostoevsky. Sometimes you may even be an expert in a single work. I think the problem is mitigated a little bit by the very low ratio of jargon to information. Math, on the other hand, has an entire new language to learn in each new subfield.
The programming world is like this too, only worse
Much worse. I think the problem is magnified by our obsession with languages - overlapping subsets of syntax features that have highly intricate relationships with programming techniques (making certain techniques easier to implement and others harder, regardless of problem domain).
There is also the language == word on your résumé issue. If something new and cool comes out, people will resist it because they will have to give up their "10 years of Foo experience" for "1 year of Bar experience". Sad but true. (I just say on my résumé, "X years programming experience". Except I don't really know how to pick X, because I have been programming since I was 5 and writing useful programs since high school. Slightly different from going to a Java training class and showing up to work every day for a few years...)
There's some overlap, as well as actor/message-passing concurrency (Erlang), dataflow (Prolog, Oz, etc.), constraint programming, vector-oriented programming (APL), etc. Probably a dozen more, depending on where you draw some fine lines. (See CTM for a good overview.) FWIW, Prolog is just as homoiconic as Lisp, and has compile-time and runtime macros.
Still, I was more wondering about teaching methodology, not enumerating paradigms themselves. What could be done to counteract the "standing on the toes of giants" effect?
If I understand correctly, I'd imagine teaching everyone to implement languages using a system like Ian Piumarta's COLA might do the trick. The point is to break open these black boxes of abstraction (even though black boxes are good sometimes).