At schools with separate "pure" and "applied" math departments, there's a running joke in the pure-math departments that they're offended when someone accidentally finds their work useful, because it means they have, quite to their chagrin, advanced the agenda of the rival applied-math department (or at least that they weren't "pure" enough). But of course that's a self-deprecating joke more than a statement of what theoreticians really think.
I have said that quote a lot in jest. :) But I always remind myself that it is based on a real problem: there is actually very little theory going on -- the developer is doing a lot of random muddling around with little understanding, little theory, until some piece of code produces the desired behavior. The developer may decorate his muddling with tests and wordy comments, but it is still muddling around. If the developer understood his system, the little mysteries that break theory would not be so magical anymore, and he could move on to keeping his theory on par with his practice.
Countless times over the years I have seen areas where academia is decades ahead of us so-called industry experts. What is "new", "shiny", and posted all over the place online today is built of theory produced 20+ years ago. In practice, we have gotten our tasks done without some of this knowledge, but theoretically, over time and in our ignorance, we have cost our employers millions of dollars in hours dedicated to bug-fixing and features that could not meet deadlines. It is a hard pill to swallow. The silver lining is that we can recognize this and work to grow, to learn that missed theory, rather than fall into the trap of complacency.
Today's fad is functional programming. In the coming years, we will relearn the weaknesses of functional programming and why OOP was the next stage of evolution. Hopefully we will have learned not to fall into extremist camps again (the notion of a "functional vs OOP" conflict still strikes me as a hilarious misunderstanding).
Out of curiosity, could you share these weaknesses of functional programming and how OOP solves them? Or at least point me to some material that discusses them? I would be very interested. I have programmed in OOP for over a decade and just picked up functional programming in the last 2 years, so I do not have enough experience with FP to make any definite conclusions yet (although I am really liking what I see so far). Thanks.
I wish I had a good handle on that. There is much about functional programming I have yet to digest. But the hints are there (I noticed this in the timeline of when these various concepts were played with and integrated into standard specs), and in my own programming, I have learned to use the hybrid effectively. What follows is purely my current opinion, and it no doubt includes some healthy ignorance. I am but an egg.
If I were to put a finger on it, I would say that the reason OOP arose was that there was general recognition of the need for a rich type system. I got a hint of this in the first couple parts of SICP. Haskell incorporates a kind of type system that is, itself, pretty interesting. CLOS/Moose makes C++ and Java feel antiquated.
In my experience, a lot of us who learned OOP via C++, Pascal, Java, Ada, and the like generally missed the point: type systems. We tend to treat classes as collections of the language primitives rather than as combinations of data structures. Specifically, we make do with primitive data types and neglect to constrain them as a type. And "everything must be an object" is an extremist mantra with little value; given that your process image must have an entry point, which is purely a code-flow issue, was there really a point to the illusion of wrapping the entrypoint main() function in a class (I am looking at you, Java)?
Here is an OOP type example I see in the wild: an ID that is a string of four characters, a dash, and three digits tends to be stored as a plain string. This is incorrect; that string is not like other strings, and storing it that way is blind faith that nobody will misuse it. Wrap it in a type that specifically checks that the ID fits the pattern. It is easy to fall into this kind of mistake in the database world, too, where an ORM pulls in a VARCHAR2 and does not translate values of that column into an appropriate type in the system, perhaps a type with a restricted domain.
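To make that concrete, here is a minimal sketch in OCaml (the Employee_id name and the assumption that the four leading characters are letters are mine for illustration): the raw string can only enter the system through a smart constructor that validates it, so nothing else ever holds an unchecked ID.

    (* Hypothetical Employee_id module: a "four characters, dash, three digits"
       ID gets its own type instead of travelling around as a bare string. *)
    module Employee_id : sig
      type t
      val of_string : string -> t option  (* smart constructor: validate or reject *)
      val to_string : t -> string
    end = struct
      type t = string

      (* Assumes the four leading characters are ASCII letters, e.g. "ABCD-123". *)
      let is_valid s =
        String.length s = 8
        && (let ok = ref true in
            String.iteri
              (fun i c ->
                 let good =
                   if i < 4 then (c >= 'A' && c <= 'Z') || (c >= 'a' && c <= 'z')
                   else if i = 4 then c = '-'
                   else c >= '0' && c <= '9'
                 in
                 if not good then ok := false)
              s;
            !ok)

      let of_string s = if is_valid s then Some s else None
      let to_string id = id
    end

    (* Call sites are forced to handle the invalid case up front. *)
    let () =
      match Employee_id.of_string "ABCD-123" with
      | Some id -> print_endline ("valid: " ^ Employee_id.to_string id)
      | None -> print_endline "rejected"

The point is not the validation code itself; it is that the check lives in exactly one place and the abstract type makes it impossible to bypass.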
Another example: I have learned to always question if my method really needs to accept an "int". This is a smell. Could my value be negative? What is the real min/max range of the value? Could the value have a magic value, such as "Number Not Given" (or NULL)?
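In the same spirit, here is a small sketch of that question answered in the type rather than in the method body (OCaml again; the Age name and the 0..150 range are invented for the example): the signature states the legal range and makes "number not given" an explicit case instead of a magic value.

    (* Hypothetical Age module: instead of a raw int, the type says what the
       legal range is and makes "not given" explicit, with no magic -1. *)
    module Age : sig
      type t
      val of_int : int -> t option   (* rejects anything outside 0..150 *)
      val not_given : t              (* the explicit "no value" case *)
      val describe : t -> string
    end = struct
      type t = Given of int | Not_given

      let of_int n = if n >= 0 && n <= 150 then Some (Given n) else None
      let not_given = Not_given

      let describe = function
        | Given n -> string_of_int n ^ " years"
        | Not_given -> "not given"
    end

    let () =
      (match Age.of_int 42 with
       | Some age -> print_endline (Age.describe age)
       | None -> print_endline "out of range");
      print_endline (Age.describe Age.not_given)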
Functional programming helps us with code flow. OOP helps us carefully classify the data.
Most mainstream OO languages with a type system to speak of actually get in the way of correctly classifying data by conflating two separate issues: reusing implementation artefacts (aka subclassing) and classifying data into a hierarchy of concepts (aka subtyping). The only widely used OO language (for sufficiently narrow values of "widely used" and wide values of "OO") to get that right used to be Objective Caml, joined more recently by its stepchildren F# and Scala.
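To illustrate the difference with a minimal OCaml sketch (the logger classes are invented): an object's type is just its set of methods, so whether one thing can be used where another is expected (subtyping) is decided structurally and has nothing to do with which class it inherited code from (subclassing).

    class file_logger = object
      method log msg = print_endline ("file: " ^ msg)
    end

    (* console_logger shares no implementation with file_logger ... *)
    class console_logger = object
      method log msg = print_endline ("console: " ^ msg)
    end

    (* ... but both satisfy "anything with a log : string -> unit method",
       so both are accepted here, with no common ancestor class required. *)
    let notify (logger : < log : string -> unit; .. >) =
      logger#log "job finished"

    let () =
      notify (new file_logger);
      notify (new console_logger)

Both loggers are usable anywhere a log method is expected, yet neither reuses the other's implementation; the classification lives in the type, not in the inheritance graph.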
So it is actually FP that helps you with the classification.
This is a very interesting point and should be highlighted. You said implementation artifacts (especially in reference to reducing code duplication); for clarity, I think you are referring to the definition of operators on data (class methods, friend methods, and so on). I agree that subclassing (for the purpose of reusing behavior), traits (for adding behavior), and the like can be confused with classification to such an extent that class hierarchies in modern designs tend to depart from the type system and become mere code organization.
"was there really a point to the illusion of wrapping the entrypoint main() function in a class (I am looking at you, Java)?"
Far be it from me to defend Java (I hate the damn thing), but: main is just a static method in a class. The class is the entry point, as specified on the command line; main is just what the JVM launcher looks for, by convention. You could have a "main" in each class, but only the one in the specified class will be the entry point.
The way of the theorist is to tell any non-theorist that the non-theorist is wrong, then leave without any explanation. Or simply hand-wave the explanation away, claiming it is "too complex" to fully understand without years of rigorous training.
Of course I jest. :)
You have a point, but it's not all as one-sided as you say. If you never get your hands grubby with the practical, you won't have practical limitations and insights to help you on your way.
I remember, a long time ago, watching an online forum thread on riddles. One of the 'riddles' was: how many cigarettes (read as: finite regular cylinders) can you arrange such that each one is touching every other one? I watched the thread go on for three days as various theorists claimed a maximum number according to mathematical theorem A or B or C. It started out with a max of 3 and took three days of impassioned debate to work up to 6, with theories floating in and out of favour, each one claiming to be 'the absolute ceiling limit'.
I then took a matchbox down the pub and set the same task for my drunk friends (telling them to ignore the bulbous heads and square cross-section). Within five minutes, all of them, even the utterly non-scientific ones, had found a solution for six. Most of them found it within 1-2 minutes. Drunk, untrained, undisciplined folks who actually physically played with the items beat out impassioned, educated enthusiasts.
It highlighted the issue for me that theory is all well and good, but it's not useful in a vacuum. Theory and practice need each other to be efficient.
Similarly I went back to visit my old university. I saw a guy there 12 years into his PhD... and still clueless about the practicalities of what he was recording. He'd spent his whole career in the theory of it and had no idea of the realities of recording his subject, something that 3-6 months in industry would have given him in spades.
So no, it's not as clear-cut as 'theory done cleanly in practice would win', as the real world is grubbier and has more edge cases. Theory and practice need each other for efficiency. Thinking of it another way - you don't need to test your backups... in theory... :)
Incidentally, on the cigarette question, I've seen a solution for 7, but I don't know of any higher. The guy who showed the solution for 7 was also the first one to say 'this is a solution, but there may be a higher ceiling out there'.
On the one hand, playing like that can give you insight. On the other hand, it can make you lose lots of time chasing an almost solution. Are you sure that arrangement of 6 was really a solution and did not require some minute bending or deformation?
"re you sure that arrangement of 6 was really a solution and did not require some minute bending or deformation?"
It's a good question, but then, it is also a perfect example of why playing around with the real object is sometimes better than applying a theoretical analysis. The question was about cigarettes, and cigarettes do bend / deform. That means that any analysis will need to take this characteristic into account. The problem is, we don't necessarily know all of the factors that need to be modelled to do a correct analysis, which is why empirical testing is still an important technique for problem solving.
Absolutely - which is why I say they need each other. They need to be merged intelligently, and purists of either camp should be taken with a grain of salt.