>>When I read articles complaining about OOP, I just can't relate at all.
Most problems with OOP start when people take the longest possible path to achieve a goal. Enterprisey code: having to deal with thousands of classes (AbstractClassFactoryFactorySingletonDispatcherFacadeInitializer types), dependency injection, design pattern abuse. Then on top of this come things like the Spring framework. At that point you have two problems to deal with: one is the problem itself, and the second is the complexity of the language and its ecosystem.
This phenomenon has been with the OOP world almost forever. Things like Maven have helped a little, but complexity hell has been the mainstay of the OOP world for decades now.
Sure, you can write very readable OOP code. Every time I do, I see great discomfort on the faces of Java programmers during code reviews. For example, in all these years I haven't met a single Java programmer who could explain why Beans are good, or even needed. So you have these religious rituals Java programmers perform.
They feel like their job protection scheme is at risk.
I feel like this comment is exactly what the article refers to as the typical straw-man argument against OOP. Yes, a lot of bad enterprisey OOP code has been written in the last 20 or so years, but that does not mean things like design patterns, dependency injection and the Spring framework don't have their place. Don't throw the baby out with the bath water.
Yes, AbstractClassFactorySingletonWhatever classes do exist in the Spring framework, but they are there to help abstract away the complexity and flexibility of the framework so you as the application programmer can have a simple, productive programming environment, and you can still reach into the complexity (and extend it) when you need to.
Beans, interface inheritance, implementation inheritance and dependency injection are the building blocks of a programming environment that allows me to be very productive, and still write maintainable, testable, extendable and configurable code.
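To make that concrete, here's a minimal sketch of constructor-based dependency injection (names are hypothetical; a container like Spring would do this wiring for you):

    // Minimal constructor-injection sketch (hypothetical names).
    // The service depends on an interface, so production code, tests
    // and future callers can each supply their own implementation.
    interface PaymentGateway {
        void charge(String account, long cents);
    }

    class StripeGateway implements PaymentGateway {
        public void charge(String account, long cents) {
            // real HTTP call to the payment provider would go here
        }
    }

    class BillingService {
        private final PaymentGateway gateway;

        BillingService(PaymentGateway gateway) {  // injected, not new'ed up
            this.gateway = gateway;
        }

        void billCustomer(String account, long cents) {
            gateway.charge(account, cents);
        }
    }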
> and still write maintainable, testable, extendable and configurable code.
"reusable"... reusable code? That was the original point of OOP.
After a few years as a believer in the late 90's, a C++ Übermensch, I had the horrible realization that neither I nor anyone else would ever reuse any of the classes I had given beautiful interfaces to and carefully documented. Any time I spent planning for reuse I might as well have spent staring out the window.
Of course, who needs to reuse your own classes when you have the huge canker of Boost to drag around with you from task to task.
When I was doing hardware in the 80's and early 90's, I came to the conclusion that the 'reusable code' fans were all missing a point.
You are either writing a library, a framework, or something like that, or you are implementing some 'business logic'.
The point of the former is reusability. The point of the latter utterly is not, and you should not waste your company's time and money, or worse let schedules slip, chasing it.
Some further thoughts over the years.
Reusable code must have a public, well-designed, stable, and documented API or it won't ever be reused. Be honest: how many programs are going to reuse this code? Enough to justify all of the above? Didn't think so.
One of the problems with reusable code is dependencies. OOP code bases tend to have more than the ordinary number of dependencies.
As OOP is practiced, the approach is that every class should be a "library". Procedural code without a class context is felt to be a shameful throwback to an earlier era of degenerate C practices, from before people started writing "good C++", apparently.
I think having done hardware first gives me a slightly different perspective. Hardware designs get reused a lot, but usually not as a drop-in; the design gets reworked at each iteration. Either that or it's something you buy off the shelf, in which case the design interface is absolutely stable. Think a uP chip or core. They'll fix bugs and shrink the die, but they never ever change the abstract design. You can still buy 8051 cores and ICs, and that design has not changed at all in 40 years.
Compare with business logic. The problem is that the spec and use case keep moving. Moving way too fast to make a good library or framework.
I did not mention reusable anywhere in my comment. If you are writing code to be reusable, you are writing a library and in that case you better make sure that it is a separate project or module so no caller-specific details are leaking into your library.
>>> maintainable, testable, extendable and configurable
but not reusable. I'm not blaming you, or disagreeing... C++ didn't produce reusable results for me either.
I dunno what you were doing 20 years ago, but I was doing this... there wasn't any problem with the classes being unclean or the boundaries cut in the wrong place. It was simply that I was almost never going to use those classes a second time. If they had been C functions, I would almost always not have needed them a second time either.
After a year or two I realized that, this being the case, the entire loving OO encapsulation of them was not simply worthless but an active waste of time. And I went back to C.
If you get value from other C++ things that make it worth paying the price, that's great. But file away for a possible future horrifying 3am realization: perhaps nothing is worth the price of a gyre like Boost, and just writing it in high quality C may be a better answer.
Yeah I have a feeling that some companies don't trust their engineers at all and so they want them to use as many tools as possible to reduce the likelihood of mistakes.
That's probably what the functional programming, 100% code coverage and code linting trends are really about: allowing companies not to have to trust their engineers. I think this is a futile effort. If you want better code, just hire better developers with more experience who can be trusted. If you are not able to identify and recruit such people, then you shouldn't be a manager.
Bad developers will find a way to write terrible code; in any language, any paradigm, with strict linting rules enforced and even with 100% test coverage.
It really bothers me that most companies feel perfectly fine trusting financiers, lawyers and accountants with extremely sensitive company secrets and business plans but they absolutely refuse to trust engineers with their own code.
Where are the company executives who actually know and care about these kinds of things?
Some of what you're talking about can be overdone by engineers (e.g. 100% line coverage), but a lot of what you're talking about are tools engineers have developed to make their lives easier and to improve their code quality. A linter checks for common mistakes and clarity problems, and can help ensure code is written in a consistent style, which improves readability. Functional programming is a coding style that doesn't always mesh well with the languages it's tried out in, but when it does it can provide enormous benefits in local reasoning about code, and I think a large part of why it's seeing a resurgence is the experience of engineers having to build large, complex systems that become baroque and difficult to reason about in OO/procedural style.
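A contrived Java sketch of that local-reasoning point (hypothetical names):

    import java.util.List;

    class Totals {
        // Stateful style: the answer depends on hidden history,
        // i.e. how many times add() was called, and from where.
        private long runningTotal = 0;
        void add(long cents) { runningTotal += cents; }
        long get() { return runningTotal; }

        // Functional style: everything needed to understand this
        // method is visible in its signature and body.
        static long total(List<Long> cents) {
            return cents.stream().mapToLong(Long::longValue).sum();
        }
    }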
This idea that good engineers have perfect competency and write code that is immune to the problems these tools help solve is absurd and totally disconnected from the reality of the challenges involved in building software. Even the best and most respected engineers write code that is riddled with bugs, and there are CVEs to prove it.
Engineering tools and new paradigms weren't invented to add friction and police developers, they were invented to aid our very real, human cognitive limits in writing software. If anything some of these things make the process more enjoyable and far less error prone.
I think that a lot of the tools and processes that we use these days are counter-productive overall.
I think that a lot of rules enforced by code linters are rules that are good 90% of the time, but they're bad 10% of the time.
Having 100% test coverage is great to guarantee that the software behaves like it's supposed to but it's terrible when you want to change that behavior in the near future. Most systems should be built to handle change. Having 100% test coverage disincentivizes change; especially structural changes.
Also, more advanced project management tools like Jira don't actually add value to projects; they just give dumb executives the illusion of having more visibility into the project. Unfortunately, they cannot really know what's going on unless they understand the code.
I don't disagree that things like testing are overdone. I'm currently consulting on a project that was largely written by another consultant who was fanatical about TDD. Most of the code was very poorly structured, but had hundreds of brittle tests (where it wasn't exactly clear what was even being tested). Since this project is still pretty new, one of the first decisions we made was to disable the tests and do a major refactoring (then rewrite maybe half of the tests). As a result we've been much more productive and the app is if anything _more_ stable now and much easier to understand.
And I agree that project management tools (especially JIRA) are pretty frustrating and provide a dubious value proposition. Project management is going to need to have a reckoning some day w/ the fact that it's very difficult to glean any insight from estimates and velocity (and that very few places actually even measure these things in a consistent fashion). The only value to "agile" and the ecosystem surrounding it IMO is that it de-emphasized planning to some extent and pushed the importance of communication. These are ultimately human problems though, not ones that can be solved by technology. Also I think there's a laughably small amount of interest in giving software engineers the time to train and learn.
Software engineering as a discipline is still in the dark ages. We don't really have solid data on the impact of things like programming language choice, development methodologies, etc. Most of the received wisdom about these things is from people who are trying to sell their services as consultants and offer project management certifications. That's not to say that there is no value in tools, techniques, language features, etc. to software engineering, there's a huge value, but we need to better understand how contingent the advantages of these decisions are.
As always, I think the things en vogue in software engineering are somewhat of a wash. I'm very happy that statically typed functional programming languages have seen a resurgence because I think they offer an enormously better alternative to TDD-ing your project to death by allowing you to make error states unrepresentable and discouraging mutable state (I also think encoding invariants in types actually makes it faster to adapt to changing requirements, especially w/ the refactoring abilities in editors). On the other hand, there are lots of bad ideas about architecture and scaling today, particularly with the "microservice" trend, and I think people poorly understand how these things were mostly created as a response to organizational growth, not per se in search of better performance or some kind of invariant best practice about architecture.
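As a rough sketch of the "error states unrepresentable" idea, in Java 21 terms since that's the thread's lingua franca (hypothetical domain):

    // Sketch of "make error states unrepresentable" with sealed types
    // (Java 21; hypothetical domain). A lookup is either Found or
    // NotFound, and the compiler forces callers to handle both cases:
    // there is no null to forget about.
    sealed interface Lookup permits Found, NotFound {}
    record Found(String value) implements Lookup {}
    record NotFound(String key) implements Lookup {}

    class LookupDemo {
        static String describe(Lookup result) {
            return switch (result) {        // exhaustive, no default needed
                case Found f    -> "got " + f.value();
                case NotFound n -> "missing " + n.key();
            };
        }
    }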
In any case, I think there is an inherent tension w/ management that has indirectly led to some of these practices, but I would push back on the idea that we only adopt some of these to satisfy management. Tools that push you towards correct and safe code IMO make the job less anxiety-inducing and give your code a chance to actually meet the expectations users place in your project by using it. In my experience these are the kinds of things management couldn't care less about until it impacts their profitability.
Reusable, testable, extensible and configurable classes are the reason for bad enterprisey OOP code.
Testability with the poverty of tools available in languages like Java is the single biggest driver, and the alleged induced "improvements" in extensibility and factoring make people feel happy about making all their objects fully configurable, all their dependencies pluggable and replaceable so they can be mocked or stubbed, with scarcely a thought for the costs of all this extra abstraction.
Extensible and configurable code is not an unalloyed virtue. For every configuration, there is a choice; for every extension, there is a design challenge. These things have costs. When your extension and dependency injection points only ever have a single concrete implementation outside of tests, they bake in assumptions on the other side of the wall and are not actually as extensible and configurable as you think. And your tests, especially if you use mocking, in all probability over-specify your code's behaviour, making your tests more brittle and decreasing maintainability.
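A hypothetical Mockito-flavoured sketch of that over-specification trap:

    // Hypothetical sketch of an over-specified mock test. It asserts
    // *how* greet() does its job rather than *what* it achieves, so a
    // harmless refactoring (rewording, batching sends) breaks it.
    import static org.mockito.Mockito.*;

    interface Mailer { void send(String to, String body); }

    class Greeter {
        private final Mailer mailer;
        Greeter(Mailer mailer) { this.mailer = mailer; }
        void greet(String user) { mailer.send(user, "hello " + user); }
    }

    class GreeterTest {
        void overSpecified() {
            Mailer mailer = mock(Mailer.class);
            new Greeter(mailer).greet("ann");

            verify(mailer).send("ann", "hello ann");
            verifyNoMoreInteractions(mailer);  // brittle by construction
        }
    }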
And parameterization that makes control flow dependent on data flow in a stateful way (i.e. not mere function composition) makes your code much harder for new people to understand. Instead of being able to use a code browser or simple search to navigate the code, they must mentally model the object graph at runtime to chase through virtual and interface method calls.
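A tiny sketch of the navigation problem (hypothetical names):

    // A text search finds this call site, but which process() actually
    // runs is decided by whatever was wired into `handler` at startup,
    // somewhere else entirely.
    interface Handler { void process(String msg); }

    class Pipeline {
        private final Handler handler;  // any of N implementations
        Pipeline(Handler handler) { this.handler = handler; }
        void run(String msg) {
            handler.process(msg);       // target unknowable from this file
        }
    }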
I think there are better ways to solve almost every problem OOP solves. OOP is reasonably OK at GUI components, that's probably its optimal fit. Outside of that, it's not too bad when modelling immutable data structures (emulating algebraic data types) or even mutable data structures with a coherent external API. It's much weaker - clumsy - at representing functional composition, with most OO programming languages slowly gaining lambdas with variable capture to overcome the syntactic overhead of using objects to represent closures. And it's pretty dreadful at procedural code, spawning all too common Verber objects that call into other Verber objects, usually through testable allegedly extensible indirections - the best rant I've read on this is https://steve-yegge.blogspot.com/2006/03/execution-in-kingdo... .
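To illustrate the closure overhead in Java terms:

    // The objects-as-closures ceremony, sketched in Java: the same
    // comparator as a pre-Java-8 anonymous class and as the lambda
    // syntax later added to paper over exactly this overhead.
    import java.util.Comparator;

    class Sorting {
        // an object whose only job is to carry one function
        static final Comparator<String> BY_LENGTH_OLD =
            new Comparator<String>() {
                @Override
                public int compare(String a, String b) {
                    return Integer.compare(a.length(), b.length());
                }
            };

        // Java 8+: the closure without the wrapper-object syntax
        static final Comparator<String> BY_LENGTH =
            Comparator.comparingInt(String::length);
    }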
>>but that does not mean things like design patterns, dependency injection and the Spring framework don't have their place.
Part of the reason Go has picked up so well, and languages like Python and Perl still see such widespread use, is that many people aren't developing monstrous monolithic applications like in the pre-2000s.
You needed all these big complexity-management features because people were packing every tiny little feature into one application callable from 'public static void main'. Then you spot some tiny, far-removed abstract patterns of code that overlap in such a giant monolith, and to make use of them you unload the entire design-patterns textbook.
This was totally unnecessary. In fact it's perfectly OK to have 5-10% duplicate code if that buys simplicity and maintainability for the remaining 90-95% of the code.
The rise of microservices architecture and related trends is only going to push software development further in this direction.
>>Yes, AbstractClassFactorySingletonWhatever classes do exist in the Spring framework, but they are there to help abstract away the complexity and flexibility of the framework so you as the application programmer can have a simple, productive programming environment, and you can still reach into the complexity (and extend it) when you need to.
I'm not sure, but there are other language communities which solve the same problems without writing 30 classes just to do variable++.
>>Beans, interface inheritance, implementation inheritance and dependency injection are the building blocks of a programming environment that allows me to be very productive, and still write maintainable, testable, extendable and configurable code.
Yet to see such an environment.
It feels like Java community programmers create tons of complexity and then tons of complex frameworks to solve complexity that shouldn't even exist in the first place.
One argument that I often don't see mentioned when talking about OOP code, especially the admittedly awful enterprise type, is that this style of coding is happening for a reason.
I think the reason is this: you have a large sprawling codebase, consisting of tens of thousands of 'business rules' or more, and you have to find a way to allow cheap, run-of-the-mill programmers to make changes to those business rules in an approachable manner.
I don't think it's fair to evaluate enterprise OOP code based on the style you would prefer when undertaking a standalone project, even a large one, when one of the very problems that this enterprise style is trying to solve is how to have hundreds of fairly green developers work on the codebase.
Now, as an exercise imagine taking this social problem I've described above, and implementing it in your favourite programming style, be it lisp/scheme, functional programming, JS or whatever you prefer, and try to honestly imagine whether it would hold up any better under those conditions.
Sorry, not quite. I believe that people use OOP in enterprise settings perhaps thinking:
* Encapsulation might make it safer for programmers to modify code without affecting other areas.
* Inheritance will make it easier to set standards for large numbers of high turnover programmers.
and that it then grows complex due to various, mostly social, factors.
The testable hypothesis in this is that if you started with another programming model, and applied the same social pressures to it, it would end up just as ugly and complex.
These codebases have hordes of programmers banging away on them with little regard for the bigger picture, which makes the code ugly and inconsistent.[1]
Architects then react to this and attempt to enforce standards by introducing patterns and subclasses and facades etc, which is what makes it complex.
Basically, I'm saying that the main straw man used to argue against OOP, the enterprise codebase, is chosen for the wrong reasons.
Put any other programming style under the same pressures and you'll end up with a similar big ball of mud.