Becoming a professional Haskell and Erlang developer really shifted my view on OOP (let OOP denote class-based OOP as found in Java or C++). In my view, OOP has proven a poor model for computation, and the result is that OO code is almost always significantly more complex and error prone than an equivalent computation written in a concurrent, functional, or structured paradigm. Recent trends in language design (see Rust, Go, Elixir) also seem to be abandoning OOP in favor of other models.
OOP provides competing ways of abstracting behavior that in Haskell we can model with type parameters and constraints. Objects are not truly encapsulated in the way Erlang processes are, and they are a poor fit for SMP. Objects also pathologically hide data in an attempt to manage mutability, making it impossible to reason about the memory layout of the program.
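To make that concrete, here's a minimal sketch of behavior abstracted with a type parameter and a constraint (Pretty and User are made-up names, not from any real codebase):

    -- A constraint on a type parameter replaces both interface
    -- dispatch and inheritance.
    class Pretty a where
      pretty :: a -> String

    data User = User { name :: String }

    instance Pretty User where
      pretty u = "user " ++ name u

    -- Works for any type with a Pretty instance; no class hierarchy.
    prettyAll :: Pretty a => [a] -> [String]
    prettyAll = map pretty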
All in all, OOP is a toolkit for building bad abstractions: abstractions that do not easily model computation, that hide data, and that tend to produce overly complex solutions riddled with errors that a language focused more on type expressivity could catch at compile time.
I don't understand what the problem with OOP is. My OOP code has always been clean and easy for myself and other people to read and modify.
When I read articles complaining about OOP, I just can't relate at all.
The idea of functional programming makes no sense to me except for writing very specific simple programs. For example, I like using some functional programming on the front end with VueJS but even there, I still allow mutations in certain parts of the code.
I like being able to store state in different instances and allow it to be mutated independently of each other.
I like it when related logic and state is kept close together in the code; it makes it easy to reason about different parts of the code independently.
On the other hand, with functional programming, it can be difficult to figure out where state comes from because there is very little abstraction to separate different parts of the code.
>>When I read articles complaining about OOP, I just can't relate at all.
Most problems with OOP start when people try to take the longest possible path to achieve a goal. Enterprisey code, like having to deal with thousands of classes (AbstractClassFactoryFactorySingletonDispatcherFacadeInitializer types), dependency injection, design pattern abuse. Then on top of this come things like the Spring framework. At that point you have two problems to deal with: the problem itself, and the complexity of the language.
This phenomenon has plagued the OOP world almost forever. Things like Maven have helped a little, but complexity hell has been the mainstay of the OOP world for decades now.
Sure, you can write very readable OOP code. Every time I do, I see great discomfort on the faces of Java programmers during code reviews. For example, over all these years I haven't met a single Java programmer who could explain why Beans are good, or even needed. So you have these religious rituals Java programmers perform.
They feel like their job protection scheme is at risk.
I feel like this comment is exactly what the article refers to as the typical straw-man argument against OOP. Yes, a lot of bad enterprisey OOP code has been written in the last 20 or so years, but that does not mean things like design patterns, dependency injection and the Spring framework don't have their place. Don't throw the baby out with the bath water.
Yes, AbstractClassFactorySingletonWhatever classes do exist in the Spring framework. They are there to abstract away the complexity and flexibility of the framework so that you, the application programmer, can have a simple, productive programming environment, and you can still reach into that complexity (and extend it) when you need to.
Beans, interface inheritance, implementation inheritance and dependency injection are the building blocks of a programming environment that allows me to be very productive, and still write maintainable, testable, extendable and configurable code.
> and still write maintainable, testable, extendable and configurable code.
"reusable"... reusable code? That was the original point of OOP.
After a few years of being a believer in the late 90's as a C++ Ubermensch, I had the horrible realization that neither I nor anyone else would ever reuse any of the classes I had given beautiful interfaces and carefully documented. Any time I spent planning for reuse I might as well have spent staring out the window.
Of course who needs to reuse your own classes when you have the huge canker Boost you can drag around with you from task to task.
When I was doing hardware in the 80's and early 90's, I came to the conclusion that the 'reusable code' fans were all missing a point.
You are either writing a library, a framework, or something like that, or you are implementing some 'business logic'.
The point of the former is reusability. The point of the latter utterly is not, and you should not waste your company's time and money, or worse let schedules slip, chasing it.
Some further thoughts over the years.
Reusable code must have a public, well-designed, stable, and documented API or it won't ever be reused. Be honest: how many programs are going to reuse this code? Enough to justify all of the above? Didn't think so.
One of the problems with reusable code is dependencies. OOP code bases tend to have more than the ordinary number of dependencies.
As OOP is practiced, every class is supposed to be a "library". Procedural code without a class context is felt to be a shameful throwback to the degenerate C practices of an earlier era, from before people started writing "good C++", apparently.
I think having done hardware first gives me a slightly different perspective. Hardware designs get reused a lot, but usually not as a drop-in; the design gets reworked at each iteration. Either that or it's something you buy off the shelf, in which case the design interface is absolutely stable. Think a uP chip or core. They'll fix bugs and shrink the die, but they never, ever change the abstract design. You can still buy 8051 cores and ICs, and that design hasn't changed at all in 40 years.
Compare with business logic. The problem is that the spec and use cases keep moving, way too fast for the code to make a good library or framework.
I did not mention reusable anywhere in my comment. If you are writing code to be reusable, you are writing a library and in that case you better make sure that it is a separate project or module so no caller-specific details are leaking into your library.
>>> maintainable, testable, extendable and configurable
but not reusable. I'm not blaming you, or disagreeing... C++ didn't produce reusable results for me either.
I dunno what you were doing 20 years ago, but I was doing this... there wasn't any problem with the classes being unclean or the boundaries cut in the wrong place. It was simply that I was almost never going to use those classes a second time. If they had been C functions, I would almost never have needed them a second time either.
After a year or two I realized that, this being the case, the entire loving OO encapsulation effort was not simply worthless but an active waste of time. And I went back to C.
If you get value from other C++ things that make it worth paying the price, that's great. But file this away for a possible future horrifying 3am realization: perhaps nothing is worth the price of a gyre like Boost, and just writing it in high-quality C may be a better answer.
Yeah I have a feeling that some companies don't trust their engineers at all and so they want them to use as many tools as possible to reduce the likelihood of mistakes.
That's probably what functional programming, 100% code coverage and code linting trends are really about; allowing companies to not have to trust their engineers. I think this is a futile effort. If you want better code, just hire better developers with more experience who can be trusted. If you are not able to identify and recruit such people, then you shouldn't be a manager.
Bad developers will find a way to write terrible code; in any language, any paradigm, with strict linting rules enforced and even with 100% test coverage.
It really bothers me that most companies feel perfectly fine trusting financiers, lawyers and accountants with extremely sensitive company secrets and business plans but they absolutely refuse to trust engineers with their own code.
Where are the company executives who actually know and care about these kinds of things?
Some of what you're talking about can be overdone by engineers (e.g. 100% line coverage), but a lot of what you're talking about are tools engineers have developed to make their lives easier and to improve their code quality. A linter checks for common mistakes, clarity problems, and can help ensure code is written in a consistent style which improves readability. Functional programming is a coding style that doesn't always mesh well with the languages it's tried out in, but when it does it can provide enormous benefits in local reasoning about code, and I think a large part of why it's seeing a resurgence is the experience of engineers having to build large, complex systems that become baroque and difficult to reason about in OO/procedural style.
This idea that good engineers have perfect competency and write code that is immune to the problems these tools help solve is absurd and totally disconnected from the reality of the challenges involved in building software. Even the best and most respected engineers write code that is riddled with bugs, and there are CVEs to prove it.
Engineering tools and new paradigms weren't invented to add friction and police developers, they were invented to aid our very real, human cognitive limits in writing software. If anything some of these things make the process more enjoyable and far less error prone.
I think that a lot of the tools and processes that we use these days are counter-productive overall.
I think that a lot of rules enforced by code linters are rules that are good 90% of the time, but they're bad 10% of the time.
Having 100% test coverage is great to guarantee that the software behaves like it's supposed to but it's terrible when you want to change that behavior in the near future. Most systems should be built to handle change. Having 100% test coverage disincentivizes change; especially structural changes.
Also, more advanced project management tools like Jira don't actually add value to projects; they just give dumb executives the illusion of having more visibility into the project - Unfortunately, they cannot really know what's going on unless they understand the code.
I don't disagree that things like testing are overdone. I'm currently consulting on a project that was largely written by another consultant who was fanatical about TDD. Most of the code was very poorly structured, but had hundreds of brittle tests (where it wasn't exactly clear what was even being tested). Since this project is still pretty new, one of the first decisions we made was to disable the tests and do a major refactoring (then rewrite maybe half of the tests). As a result we've been much more productive and the app is if anything _more_ stable now and much easier to understand.
And I agree that project management tools (especially JIRA) are pretty frustrating and provide a dubious value proposition. Project management is going to need to have a reckoning some day w/ the fact that it's very difficult to glean any insight from estimates and velocity (and that very few places actually even measure these things in a consistent fashion). The only value to "agile" and the ecosystem surrounding it IMO is that it de-emphasized planning to some extent and pushed the importance of communication. These are ultimately human problems though, not ones that can be solved by technology. Also I think there's a laughably small amount of interest in giving software engineers the time to train and learn.
Software engineering as a discipline is still in the dark ages. We don't really have solid data on the impact of things like programming language choice, development methodologies, etc. Most of the received wisdom about these things comes from people who are trying to sell their services as consultants and offer project management certifications. That's not to say there is no value in tools, techniques, language features, etc. to software engineering; there's huge value, but we need to better understand how contingent the advantages of these decisions are.
As always, I think the things en vogue in software engineering are somewhat of a wash. I'm very happy that statically typed functional programming languages have seen a resurgence because I think they offer an enormously better alternative to TDD-ing your project to death by allowing you to make error states unrepresentable and discouraging mutable state (also I think encoding invariants in types actually makes it faster to adapt to changing requirements, especially w/ the refactoring abilities in editors). On the other hand, there are lots of bad ideas about architecture and scaling today, particularly with the "microservice" trend and I think people poorly understand how these things were mostly created as a response to organizational growth, not per se in search of better performance or some kind of invariant best practices about architecture.
In any case, I think there is an inherent tension w/ management that has indirectly led to some of these practices, but I would push back on the idea that we only adopt them to satisfy management. Tools that push you towards correct and safe code IMO make the job less anxiety-inducing and give your code a chance to actually meet the expectations users place in your project by using it. In my experience these are the kinds of things management couldn't care less about until they impact profitability.
Reusable, testable, extensible and configurable classes are the reason for bad enterprisey OOP code.
Testability with the poverty of tools available in languages like Java is the single biggest driver, and the alleged induced "improvements" in extensibility and factoring makes people feel happy about making all their objects fully configurable, all their dependencies pluggable and replaceable so they can be mocked or stubbed, with scarcely a thought for the costs of all this extra abstraction.
Extensible and configurable code is not an unalloyed virtue. For every configuration, there is a choice; for every extension, there is a design challenge. These things have costs. When your extension and dependency injection points only ever have a single concrete implementation outside of tests, they bake in assumptions on the other side of the wall and are not actually as extensible and configurable as you think. And your tests, especially if you use mocking, in all probability over-specify your code's behaviour making your tests more brittle and decreasing maintainability.
And parameterization that makes control flow dependent on data flow in a stateful way (i.e. not mere function composition) makes your code much harder for new people to understand. Instead of being able to use a code browser or simple search to navigate the code, they must mentally model the object graph at runtime to chase through virtual and interface method calls.
I think there are better ways to solve almost every problem OOP solves. OOP is reasonably OK at GUI components, that's probably its optimal fit. Outside of that, it's not too bad when modelling immutable data structures (emulating algebraic data types) or even mutable data structures with a coherent external API. It's much weaker - clumsy - at representing functional composition, with most OO programming languages slowly gaining lambdas with variable capture to overcome the syntactic overhead of using objects to represent closures. And it's pretty dreadful at procedural code, spawning all too common Verber objects that call into other Verber objects, usually through testable allegedly extensible indirections - the best rant I've read on this is https://steve-yegge.blogspot.com/2006/03/execution-in-kingdo... .
>>but that does not mean things like design patterns, dependency injection and the Spring framework don't have their place.
Part of the reason Go has picked up so well, and why languages like Python and Perl still see such widespread use, is that many people aren't developing monstrous monolith applications like in the pre-2000's.
You needed all these big complexity-management features because people were cramming every tiny little feature into something callable from 'public static void main'; then you'd notice some very small, far-apart patterns of code that overlap across such a giant monolith, and to exploit that you'd unload the entire design-pattern textbook.
This was totally unnecessary. In fact it's perfectly OK to have 5-10% duplicate code if that buys simplicity and maintainability for the remaining 90-95% of the code.
The overall rise of microservices architecture and related trends is only going to push development further this way.
>>Yes, AbstractClassFactorySingletonWhatever classes do exist in the Spring framework, they are there to help abstract away the complexity and flexibility of the framework so you as the application programmer can have a simple, productive programming environment, and you can still reach into the complexity (and extend it) when you need to.
I'm not sure, but there are other language communities which solve the same problem without writing 30 classes just to do variable++
>>Beans, interface inheritance, implementation inheritance and dependency injection are the building blocks of a programming environment that allows me to be very productive, and still write maintainable, testable, extendable and configurable code.
Yet to see such an environment.
It feels like Java community programmers create tons of complexity and then tons of complex frameworks to solve complexity that shouldn't even exist in the first place.
One argument that I often don't see mentioned when talking about OOP code, especially the admittedly awful enterprise type, is that this style of coding is happening for a reason.
I think the reason is this: you have a large sprawling codebase, consisting of tens of thousands of 'business rules' or more, and you have to find a way to allow cheap run of the mill programmers to make changes to those business rules in an approachable manner.
I don't think it's fair to evaluate enterprise OOP code based on the style you would prefer when undertaking a standalone project, even a large one, when one of the very problems that this enterprise style is trying to solve is how to have hundreds of fairly green developers work on the codebase.
Now, as an exercise imagine taking this social problem I've described above, and implementing it in your favourite programming style, be it lisp/scheme, functional programming, JS or whatever you prefer, and try to honestly imagine whether it would hold up any better under those conditions.
Sorry not quite, I believe that people use OOP in enterprise settings perhaps thinking:
* Encapsulation might make it safer for programmers to modify code without affecting other areas.
* Inheritance will make it easier to set standards for large numbers of high turnover programmers.
and that then it grows complex for various mostly social factors.
The testable hypothesis in this is that if you started with another programming model, and applied the same social pressures to it, it would end up just as ugly and complex.
These codebases have hordes of programmers banging away on them, with little regard for the bigger picture, which makes the code ugly and inconsistent.[1]
Architects then react to this and attempt to enforce standards by introducing patterns and subclasses and facades etc, which is what makes it complex.
Basically, I'm saying that the main straw man used to argue against OOP, the enterprise codebase, is chosen for the wrong reasons.
Put any other programming style under the same pressures and you'll end up with a similar big ball of mud.
Also for those of you doing OOP looking to try something different:
Java/C++ -> Rust:
It will feel familiar to you with its C style syntax and multi-statement function bodies. It takes the best parts of many different paradigms, and marries them in a very coherent way. It's performant, has incredible tooling (cargo, rustup), and the community is great.
Python/Ruby -> Elixir:
Elixir has the Phoenix framework, great support for the web, and supports massive concurrency using the Actor model. Jose did a great job cleaning up the Erlang syntax, and its gradual typing and heavy use of macros will feel immediately familiar to you.
It looks like you are working on stuff that may not benefit from OOP. You seem to be doing heavy "computations"; you care more about the data than the logic around it. Probably Haskell suits your use cases more.
Actually, I work primarily on web services. We benefit from Haskell in that HKTs allow us to model our service completely in the type system. Our REST and GraphQL APIs are completely modeled in a type-level DSL. Once we've ingested data, which can then be validated automatically using types, we can express our data transformations in a way that makes the transitions extremely transparent in our code.
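For a rough idea of what "API modeled in a type-level DSL" can look like, here's a sketch in the style of the servant library (an assumption on my part; the poster doesn't name their library, and the payload type is made up):

    {-# LANGUAGE DataKinds, TypeOperators #-}
    import Servant

    -- Hypothetical payload type; JSON instances and handlers elided.
    data User = User

    -- Each endpoint is described entirely at the type level; the
    -- compiler checks that handlers match this specification.
    type UserAPI = "users" :> Get '[JSON] [User]
              :<|> "users" :> ReqBody '[JSON] User :> Post '[JSON] User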
Our backend services are written in Rust for things that need efficient computation.
That sounds very interesting. Did you open source any parts of your code, write any blog posts about it or give any public talks about it? I would love to read/watch whatever you have available to the public :)
I do think this echoes fooyc's point though: this does not sound like an area that would benefit from OOP. Sounds like an ideal use-case for functional.
My personal anecdotal experience is that OOP lends itself well to very very large codebases. A language like java with packaging, classes, and encapsulation strongly encourages some meaningful organization. Even if that organization is implemented poorly by the user, it's better than what I have seen users create in the real world with languages like C.
If you have a project where a single person could reasonably understand all the components then maybe OOP is overkill.
I also recognize others may have had an equal and opposite experience.
I was a programming teacher some years ago and I would sometimes grade two submissions, which had both gotten full marks from the automated tests but where one was 150 loc and the other 500 loc. The most surprising part was that if I read the longer submission first, it would never really seem like there was that much excess code to take away. But obviously there was.
> My personal anecdotal experience is that OOP lends itself well to very very large codebases.
In my experience Java / C++ style OOP creates very, very large codebases in just that way. I think it's quite surprising that many engineers don't seem to have any idea how much waste there is in their "clean" Java code (or C++ that follows the same style).
I also almost always find dataflow-oriented or functional code far easier to read than its OO counterpart, because all the state transitions are made absolutely clear. In OOP, your system state is spread across every file in the project. Compare that to, say, a website implemented using React/Redux: I can inspect a single object to see the entire state of the application. When there's a bug, usually either that object has the wrong value (so I know where to fix it), or the object has the right value and the bug is in the rendered component. Easy peasy. In a Java program, by comparison, the bugs often show up between classes. They end up relating to the relative timing of function calls that mutate state, or to the lifecycles of objects conflicting in weird ways. That's much worse for comprehension and much more work to debug. (And yes, I've spent years working in both styles professionally.)
I have a hard time getting on board with the "waste" or loc argument. The things that matter most to me are readability, organization, maintainability, etc. If the structure that provides those things results in there being more lines of code, or more "waste" then that's a price I'm happy to pay.
The context you describe being a teacher is exactly the kind of case I was suggesting OOP may not be beneficial. If you're in a programming class you are generally not maintaining code bases with millions upon millions of lines of code. In that case, in a case where a single person is likely capable of understanding the entire code base, OOP probably isn't bringing any serious benefits.
> I have a hard time getting on board with the "waste" or loc argument. The things that matter most to me are readability, organization, maintainability, etc
I think people also have different ideas on readability. Some prefer a single file with 2k lines and some prefer 40 small files in 12 folders with 50 lines each. Like the parent says you can often write the same code in 4x length, be it to prepare for future features or just to have it look "clean".
Maybe. I suspect most people who say they prefer the java style are simply wrong. I don't think they've spun up on enough new projects to notice the extra weight that increased size and scale brings. Or they're assuming that the extra complexity is all necessary, and they're mistaken the same way I was while grading those assignments. (Aside: Isn't it super weird how reading code is so rarely encouraged at school?)
I suspect you could objectively measure this - take two implementations of the same problem; one small and one large but where the implementations do the same thing. For example, write a simple website using a stateful OO style. Then write the equivalent code using the mostly stateless react component style. The latter would be functionally equivalent, while using less code. The latter would also use much less local hidden state.
Then measure how long it takes new people to start making meaningful changes to the two respective codebases. I don't think its a matter of opinion or preference. I expect that the functional component model would come out as a clear productivity winner.
The easiest code to change is code you never needed to write in the first place.
The use case I gave is not a simple website, it's a code base with millions of lines of code. I agree that a simple website or small school project is a different situation with different needs.
Have you tried something with typeclasses, ADTs, or some form of interfaces? C really doesn't provide any tools to make abstractions and lacks the basics, like namespaces.
My experience with OOP is that it generally drives people to very large codebases that are hard to reason about at scale. Usually it brings in a framework of some sort to manage the complexity of wiring objects together in a manageable way. For example, Java Spring uses annotations to magically wire objects together.
For some reason 'arguments' against OOP seem to follow a common pattern. You have said many things against OOP, but you haven't actually presented an argument for why it's bad. I'll present each of your assertions here individually to clarify.
> OOP has proven a poor model for computation
> OO code is almost always significantly more complex and error prone than an equivalent computation written in a concurrent, functional, or structured paradigm.
These claims may or may not be true, but they aren't very useful at all if you don't provide a justification as well as merely asserting them.
> Recent trends in language design (see Rust, Go, Elixir) also seem to be abandoning OOP in favor of other models
There are a number of ways to account for these trends besides OOP being an intrinsically bad model. The most obvious one is that there are fashions in language design, and right now OOP is not fashionable. We already know that; it's not a strong argument against it. Ironically, many in the OOP opposition use the same argument to explain OOP ever getting popular in the first place: "it was just fashionable."
> OOP provides competing ways of abstracting behavior that in Haskell we can model with type parameters and constraints.
> Objects are not truly encapsulated in the way Erlang processes are, and they are a poor fit for SMP.
You have pointed out here that more classical OOP languages do things differently from Haskell and Erlang. This should be expected and is not an argument against those OOP languages. (Yes, you could say "Erlang is better at concurrency" because of the way in which it's different, but my understanding is that Erlang is pretty well accepted to be a sort of freak of nature here, so it's not a good argument against OOP generally.)
> Objects also pathologically hide data in an attempt to manage mutability, making it impossible to reason about the memory layout of the program.
They do hide data, but the 'pathologically' is something you've added on your own. There is a design philosophy in which this data hiding plays an important, positive role. When you say "making it impossible to reason about the memory layout of the program", this sounds to me like missing the point of that design philosophy: the purpose (and oftentimes the tradeoff) of higher-level languages is that you don't need to personally manage these details. I think it's largely application-dependent: you may be writing code that requires that, but not all interesting software hinges on low-level performance tuning.
> OOP is a toolkit for building bad abstractions:
> ... abstractions that do not easily model computation
> ... that hide data
> ... and that tend to produce overly complex solutions riddled with errors that a language focused more on type expressivity could catch at compile time.
Another collection of unjustified assertions, except the 'hide data' part which I accounted for earlier.
So across ~10 negative assertions about OOP you have 3 quasi-justifications: newer languages aren't using OOP as much, hiding data is bad, and Erlang is better for SMP.
The pattern you talk about is mainly a product of not wanting to squeeze a whole essay into an HN comment.
I also suspect the problems with OOP are hard to communicate. I for one always had a problem with OOP, but I could never quite point it out. Sure, when faced with an OOP design, I could almost always find simplifications. But maybe I never saw the good designs? Maybe this was OOP done wrong?
I do have reasons to think OOP is not the way (no sum types, cumbersome support for behavioural parameters, and above all an unreasonable encouragement of mutability), but then I have to justify why those points are important, and why they even apply to OOP (it's kind of a moving target). Overall, all I'm left with is a sense of uneasiness and distrust towards OOP.
> The pattern you talk about is mainly a product of not wanting to squeeze a whole essay into an HN comment.
That may very well apply to the GP's comment—but, my observation of the pattern is derived from a mix of mini-essay comments, and articles people are writing on Medium or their blogs or whatever, where the space constraints aren't so tight.
There are a couple things you'll regularly find: laughably bad straw-men (GP is free of these), overly vague statements that only survive scrutiny because of their vagueness (e.g. when the GP says OOP produces "abstractions that do not easily model computation"), and unjustified claims.
The net effect is something that sounds bad, but if looked at closely carries very little force.
I suspect the reasons for it are:
1) Actually evaluating a language paradigm is more difficult than these folks suspect. Their view matches their experience and they assume their experience is more global than it really is. Additionally, we don't have a mature theoretical framework for making the comparisons.
2) People are arguing for personal reasons. They have committed themselves to some paradigm and they want to feel secure in their justification for doing so.
> Additionally, we don't have a mature theoretical framework for making the comparisons.
This is really the problem. As much as I have strong opinions and beliefs about how to architect code, every argument I come up with boils down to some flavor of "I like it better this way". Which is true -- I do like it better this way -- but hardly actionable, and it doesn't get at the essence of why I like it better.
The problem with making everything an object -- or more precisely, having lots of mutable objects in an object space with a complex dependency graph -- is that it becomes very hard to model both how the program state changes over time and what causes the program state to change in the first place. I think the prevailing OOP fashion is to cut objects and methods apart into ridiculously small pieces, which takes encapsulation and loose coupling to an extreme. This gives rise to the popular quip, "In an OOP program, everything happens somewhere else." I can't think straight in this kind of setting.
I believe that mutable state should be both minimized and aggregated. As much as is humanly possible, immutable values should be used to mediate interactions between units of code (be those functional or OO units), and mutation should occur at shallow points in the call stack. Objects can work well for encapsulating this mutable state, but within the scope of an object, mutation should be minimized and functional styles preferred.
Using a functional style doesn't mean giving up on loose coupling or implementation hiding. Rust, Haskell, and plenty of other languages support these same concepts in the form of parametric polymorphism, e.g. traits or typeclasses. It does mean giving up on the idea that you can mutate state whenever it's convenient. Instead, you have to return a representation of the action you'd like to take, and let the imperative shell perform that action.
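Concretely, here's a minimal sketch of that shape in Haskell (illustrative names, no particular library): the pure core returns a description of the effects it wants; the imperative shell performs them.

    data Action = SendEmail String | LogLine String

    -- Pure core: trivially testable, no IO anywhere.
    decide :: Int -> [Action]
    decide failures
      | failures > 3 = [SendEmail "ops@example.com", LogLine "paging ops"]
      | otherwise    = [LogLine "all good"]

    -- Imperative shell: the only place effects actually happen.
    run :: Action -> IO ()
    run (SendEmail to) = putStrLn ("email -> " ++ to) -- stand-in for real IO
    run (LogLine msg)  = putStrLn msg

    main :: IO ()
    main = mapM_ run (decide 5)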
Speaking of imperative shells and functional cores, Gary Bernhardt's talk called "Boundaries" is an excellent overview of this kind of architecture [1]. There was also a thread here on HN about similar principles [2].
That makes a lot of sense to me. Looking forward to checking out "Boundaries".
Btw, one other idea I've had on the subject is that the problems with mutable state can be mitigated if we were able to more easily see/comprehend the state as it's being modified by a program; without that capability the only recourse we're left with is our imagination, which of course is woefully inadequate for the task. You can see more concretely what I'm talking about in my project here (video): http://symbolflux.com/projects/avd
From what I've seen, structuring a program to not modify state is almost always more difficult than the alternative[0]. There are certain problems where this difficulty is justified (because of, e.g., reliability demands); but I think most problems in programming are not those, and if we could just mitigate the error-proneness of state mutation, that may leave us at a good middle ground.
[0] The exception is when you're in a problem domain that can naturally be dealt with via pure functions, where you're essentially just mapping data in one format to another (i.e. no complex interaction aspects).
Oh, that's very cool! I had a similar idea years ago, but I didn't have the technical chops to pursue it at the time, and I ended up losing interest. I think this would actually be even more useful in the kind of architecture I'm describing, since the accumulated state has a richer structure, and many of the smaller bits of state that would be separate objects are put into a larger context.
> From what I've seen, structuring a program to not modify state is almost always more difficult than the alternative
You're not wrong! I don't think we should get rid of mutable state, but I do think we should be much more cognizant of how we use it. Mutation is one of the most powerful tools in our toolbox.
I've found that keeping a separation between "computing a new value" and "modifying state" has a clarifying effect on code: you can more easily test it, more easily understand how to use it, and also more easily reuse it. My personal experience is that I can more easily reason locally about code in this style -- I don't need to mentally keep track of a huge list of concepts. (I recall another quip, about asking for a monkey and getting the whole jungle.)
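A tiny sketch of that separation (applyDeposit is a made-up example, not from any codebase): the computation is a pure function you can test in isolation, and the mutation happens once, at a known place.

    import Data.IORef

    -- "Computing a new value": pure, same input always gives same output.
    applyDeposit :: Int -> Int -> Int
    applyDeposit amount balance = balance + amount

    main :: IO ()
    main = do
      ref <- newIORef 100
      modifyIORef' ref (applyDeposit 50) -- "modifying state": one place
      readIORef ref >>= print            -- prints 150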
There is a large web app at my workplace that is written in this style, and it is one of the most pleasant codebases I've ever been dropped into.
Interestingly, I think I built that project with an architecture somewhat reminiscent of the 'boundaries' concept (still just surmising at this point). It's a super simple framework with two types of things 'Domains' and 'Converters'. Domains are somewhat similar to a package... but with the boundaries actually enforced, so that you have to explicitly push or pull data through Converters to other Domains; Converters should just transform the format from one Domain to that of another (they are queue-based; also sometimes no translation is necessary).
I'll quote from the readme:
> This Domain/Converter framework is a way of being explicit about where the boundaries in your code are for a section using one ‘vocabulary,’ as well as a way of sequestering the translation activities that sit at the interface of two such demarcated regions.
Inside each Domain I imagine something like an algebra... a set of core data structures and operations on them.
But yeah, I have very frequently thought about visualizing its behavior while working on that visualizer :D
Is your research related to programming languages?
Also I'm going to have to think about "computing a new value" vs. "modifying state" —not sure I quite get it...
> Is your research related to programming languages?
Yep: I just finished a Master's degree with a focus on programming language semantics and analysis. I'm interested in all kinds of static analyses and type systems -- preferably things we as humans can deduce from the source without having to run a separate analysis tool.
> Also I'm going to have to think about "computing a new value" vs. "modifying state" —not sure I quite get it...
It's kind of a subtle distinction. A value doesn't need to have any particular locus of existence; semantically, we only care about its information content, not where it exists in memory. As a corollary, for anyone to use that value, we have to explicitly pass it onward.
On the other hand, mutation is all about a locus of existence, since ostensibly someone else will be looking at the slot you're mutating, and they don't need to be told that you changed something in order to use the updated value. (Which is the root of the problem, quite frankly!)
I upvoted you.
Why OOP is bad: In 8 years of professional (for money) programming experience I had zero use cases for OOP. Every time I tried to use OOP it backfired and I abandoned it. Maybe I just never really understood OOP, or maybe I already use OOP all the time without calling it OOP.
My main conclusion on the subject is that it really depends on what domain you're coding in.
There are plenty of applications I would never use OOP for. I use Elixir for my backend work. I tend to use OOP for interactive simulation/game sorts of applications.
What were the types of problems and systems you have worked on? Which languages and programming styles (eg functional) did you end up using to solve them?
Mostly math, optimization and data transformations. Since java8 I use functional style (map, filter, monads..). A lot of business logic is also in sql queries.
Bad code is bad code, no matter which language. Languages won't save you from poor design choices.
OOP is popular in large enterprise systems because it (purports to) promote encapsulation and abstraction, which allows many teams/people to interact. Organizations may like OOP because it helps reinforce their drive for independence and fiefdom (see Conway's law https://en.m.wikipedia.org/wiki/Conway%27s_law).
Whether it is the best paradigm, the most practical, 'just good enough', or the wrong choice is anecdotal.
Also, in my experience many companies would rather fail conventionally than succeed unconventionally.
> Do you think the tool you use to achieve a task doesn’t matter?
I think many people do think that, with a twist; specifically, when presented with a higher level language they'd argue "it's just a tool", while a lower level language would be "the wrong tool for the job".
I.e, for a hypothetical Ruby programmer, Haskell = "just a tool", C = "the wrong tool".
You're right, people go all-or-nothing. Yet no two languages have the same OO. This is problematic both ways; I have a lot of people ask me how on earth you model entities in FP, because they equate entity modeling with OO.
From my view, here are the concrete issues I have with some of the OO flavors out there.
1) It encourages shared mutable state.
OO creates a new layer of shared state: the object fields. In its very essence, the idea is to have methods which access and modify the object's state through direct access to its fields. Each field is thus mutated by the object methods that directly access it, and you need to look at the method code to know what state they read and write to. You cannot make the fields immutable, because that renders non-read-only methods useless. You must therefore coordinate access to the fields between the methods using explicit locks. Over time, in practice, this also creates massive classes with too many fields and too many methods that only use a subset of them, degenerating into even more of a globally shared state structure.
2) It makes dependency injection trickier and thus discourages its use.
Once again, the idea of methods to have direct access to fields is the cause of this, it means that methods don't have their dependencies injected, instead they go find them themselves, through direct access. Testing becomes hard, configuration is pushed down the stack inside the methods and reuse is made harder.
3) It makes inversion of control trickier and thus discourages its use.
Because code can only be passed around wrapped inside an object, and objects are heavy constructs requiring a lot of verbosity to create (a new file is required, a class must be defined, an instance must be created, etc.), it is rare in OOP to see code being injected from caller to callee. Instead, conditionals creep inside the callee and configuration parameters are passed in. (See the sketch after this list for the contrast with first-class functions.)
4) It handles stateless code poorly, and thus discourages its use.
Code that requires no state over time, a.k.a. stateless operations, must be wrapped inside a class for no good reason. OO provides nothing to such code; OO is designed for stateful code. A class with no fields isn't useful, so why do I need one if all my code is stateless? So people start using classes as namespaces.
5) Inheritance is too easy to mess up.
Inheritance as a mechanism for code reuse appeared pretty smart at first. The problem is that it turned out to be pretty hard to do right. You soon find yourself with hidden state from parents and confusing override hierarchies, which forced us into single-inheritance chains, which limited code reuse, and so on. Bottom line: it just created a ton of hidden coupling.
6) Objects are not extendable from the outside.
You can't add functionality or state to an object from the outside. You need to modify the source code of the class directly, or define new subclasses through inheritance. This limits code reuse and encourages object code to grow ever bigger as more and more features are added.
That's all I can think of for now. Now, some flavours of OOP work differently in that some or all of these problems might not exist or have solutions to them. Which is why I agree with you, it's best to know the actual problems so you can spot them. Just saying something is OOP doesn't imply all of them will exist.
A lot of languages allow stateless subroutines to exist on their own and offer a separate namespace system not conflated with the OO layer. Trait- or mixin-like systems enable open extension. Inheritance can support multiple parents, or composition is used in its place. Some languages allow subroutines to be passed around without being wrapped in an object. Dependency injection frameworks were added to simplify and encourage its use. Value objects allow a bit more support for immutability. Etc.
FP doesn't suffer from these issues, though it has others in their place, and different FP languages have found different solutions to them. That said, in my experience with many languages, both more OO-oriented and more FP-oriented, I found FP overall had fewer issues and had found cleaner solutions to its limitations.
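Here's the sketch promised in point 3 above (Haskell, with made-up names): with lightweight first-class functions, the caller injects behavior directly, no wrapper class or new file needed.

    -- The callee takes behavior as a plain parameter.
    retryWith :: Int -> (Int -> IO Bool) -> IO Bool
    retryWith 0 _      = pure False
    retryWith n action = do
      ok <- action n
      if ok then pure True else retryWith (n - 1) action

    main :: IO ()
    main = do
      _ <- retryWith 3 (\attempt -> do
             putStrLn ("attempt " ++ show attempt)
             pure (attempt == 1)) -- succeeds on the last try
      pure ()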
I don't think it is the case that a certain programming paradigm can be intrinsically harder, maybe only less practiced.
OOP / imperative style is almost the norm nowadays, but that doesn't imply it's the "easiest", just the one that got the most momentum (which can be attributed to social factors more so than technical ones), and is thus widely taught and talked about, which helps with education in that style.
Someone with 15 years of professional OOP experience once told me the best way to describe object-oriented programming: you asked for the object "monkey", and you got the whole jungle, as well as the monkey's bananas.
I don't really follow this. A fleshed-out example describing the trade-offs is almost mandatory for this kind of criticism since this could be very easily due to a misunderstanding or misapplication of OOP rather than a problem with it.
I don't have enough experience to agree or disagree with the quote you're responding to, but I think the intent was in part to claim that OOP gets misapplied enough that if you're working with a large enough group or company, that's what you'll end up with.
> Because the problem with object-oriented languages is they’ve got all this implicit environment that they carry around with them. You wanted a banana but what you got was a gorilla holding the banana and the entire jungle.
> my view on OOP (let OOP denote class based OOP as found in Java or C++)
Maybe you should take a look at a real OOP language like Smalltalk before you tell us why OOP is a bad idea. Otherwise, I might tell you why consumers will never adopt cars as I drove a Trabant once.
Smalltalk OOP looks almost exactly like Erlang gen_servers and shares a lot of nice properties with them. Unfortunately the concept has been twisted into its current-day meaning, which is akin to the OOP we find in Java, C++, etc.
In fact, I did. But I think it is wrong to judge OOP based on crude implementations. That is the reason why I added the Trabant example:
> The Trabant was loud, slow, poorly designed, badly built, inhospitable to drive, uncomfortable, confusing and inconvenient. (source: https://en.wikipedia.org/wiki/Trabant)
Java and C++ are both great tools for specific jobs, but from an OOP perspective, they are badly designed. So if you want to discuss OOP as a design choice, it would be unreasonable to discuss it based on those poor implementations. Smalltalk, on the other hand, has its own set of problems, but a poor OOP implementation is not part of it.
Ah, I see what you mean. I do think it's valuable/reasonable to discuss OOP as it is 'in the wild', but I can also see how that's a bit of a dead horse to beat, and discussing what could be (or what didn't become the mainstream) is at least more interesting.
I started as a C++/Python developer. I learned C++ and Python in school, and wasn't exposed to Haskell or Erlang until later in my career. I've also professionally written Ruby, Java (Swing/Spring), Rust, and Go.
Interesting view.
How should i, as a sysadmin who just writes bash and powershell scripts, start to learn _serious_ programming? Is it still worth to force myself into OOP?
I argue that you should start simple with functions and IF you see a use case for classes use them.
To be able to make a reasonable choice, you must learn OOP, or at least the OOP machinery of your language.
A few hints:
a) Python's abc or Java interfaces serve as templates to implement the same behavior multiple times in different contexts
b) If you never inherit from a class, the class is useless, except if you use the class as an interface (see a)
c) A class is more complex than a function, i.e. more guns to shoot yourself in the foot with.
d) think function pointer / first-class function. Many times, you can avoid a class by passing a function as parameter e.g. filter.
e) think function factory ie. a function that returns a function.
f) think pure function / side-effect-free functions. This is a BIG lifesaver. A function without side effects is much simpler to test than a function with side effects. Of course, you cannot avoid all side effects (io et al.) or mutations, so build your abstraction so that the side effects / mutation happen in a well-known place.
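Hints d), e) and f) in one small sketch (Haskell syntax here for brevity, but the ideas port to Python, PowerShell, etc.; all names are made up):

    -- d) pass a function instead of writing a class: filter takes one.
    evens :: [Int] -> [Int]
    evens = filter even

    -- e) a function factory: a function that returns a function.
    makeAdder :: Int -> (Int -> Int)
    makeAdder n = \x -> x + n

    -- f) both of the above are pure: same input, same output, nothing
    -- mutated, so they're trivial to test.
    main :: IO ()
    main = do
      print (evens [1 .. 10]) -- [2,4,6,8,10]
      print (makeAdder 5 37)  -- 42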
How so? I believe I was addressing the fundamentals of OOP: Classes, methods, inheritance, overloading, visibility, etc.
Also, Rust, while allowing the assigning of behavior to a type, is explicitly not OOP. Check its Wikipedia page (which doesn't include OOP in the list of paradigms) and the O'Reilly book (which says Rust isn't OOP).
So, I wrote that chapter, and part of why we don't really discuss definitions is that there are so many conflicting ones. Instead, we tried to focus on goals, and how they'd apply to Rust.
Personally, I believe the two big OOP definitions are "Java OOP" and "Smalltalk OOP", and Rust fits neither. People coming from heavy OOP backgrounds really struggle with Rust for this reason. It's also why this chapter was the hardest one to write.
Except that Rust lacks almost all of the features and terminology you'd expect of an OOP language. Again, I take OOP to mean what one learns in school... A class based language with inheritance, overloading, visibility of members, constructors, etc.
Rust can define behavior on any type of data, including scalar values. There is no concept of a "class", and constructors are simply functions that return instantiated values; they are not automatically called when a value is instantiated. Traits are more like interfaces: they constrain an implementer. A value has no way of inheriting behavior. Visibility is at the package level, and Rust offers no notion of encapsulation inside of a package.
Rust is, by the trending definition of OOP, not OOP.
> Again, I take OOP to mean what one learns in school
Don't do that, then. "OOP" is a term-of-art in an academic discipline. It means exactly what it was used to mean by the people who coined the term in the papers they coined it in.
And, before you ask: no, there is no academic jargon term for "the thing that C++ and Java are." From an academic perspective, neither language has any particular unifying semantics. They're both just C with more syntax.
OOP is a different set of semantics, based around closures (objects) with mutable free variables (encapsulated state), where an object's behavior is part of its state.
C++ and Java can simulate this—you might call this the Strategy pattern—but if you build a whole system out of these, then you've effectively built an Abstract Machine for an actually Object-Oriented language, one that ends up being rather incompatible with the runtime it's sitting on top of.
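A minimal sketch of that definition, "object = closure over mutable free variables" (using a Haskell IORef for the mutable state; the names are illustrative, not from any source):

    import Data.IORef

    -- The counter state is captured by the returned actions and is
    -- reachable only through them: encapsulated state plus behavior.
    makeCounter :: IO (IO Int, IO ()) -- (read, increment)
    makeCounter = do
      ref <- newIORef 0
      pure (readIORef ref, modifyIORef' ref (+ 1))

    main :: IO ()
    main = do
      (get, inc) <- makeCounter
      inc >> inc
      get >>= print -- prints 2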
Most people take "OOP" to mean what schools told them. And that's why "OOP" does mean that. Because most people vaguely agreed on some blurry definition, and use it.
The fact that "OOP" is no longer used to point to what Alan Kay originally meant is immaterial. It's a shame, but that train has passed.
I know you're arguing for "words are communication", and I'm a writer, so I certainly agree with that. In general.
OOP is not a word, though. It's a jargon term. You can't redefine those. They mean what they originally meant, because if they don't, then you lose the ability to understand what the people doing real work using the word by its proper definition are doing. Jargon terms don't drift.
To be clear, I'm mostly talking about the same thing that is true of the term "Begging the question." Laymen can use it to mean whatever they want—and I don't begrudge them that, it's a phrase in a language and people will do what they like with it. But that lay-usage will never change what the phrase means in the context of formal deductive reasoning.
Likewise, programmers can use "objects" and "OOP" to mean whatever they want it to mean—but when having a formal academic discussion about programming language theory, an "object" refers to a specific thing (that came about in LISP at MIT before even Kay; Kay just was the first to write down his observations of the properties of "objects") and "OOP" refers to programming that focuses on such "objects" (as implemented in Smalltalk, CLOS, POSIX processes, Erlang processes, bacteria sharing plasmids, or computers on a network; but not by C++ or Java class-instances.)
I don't think we disagree, here; you're arguing that the lay-usage is X, and the lay-usage is X. I'm just pointing out that the lay-usage is irrelevant in the context of a discussion that requires formal academic analysis of the concept.
You can. Anyone who's been around computing knows that "functional programming" (and even "imperative programming", which at one point contrasted with structured programming) have drifted.
> Jargon terms don't drift.
Jargon terms absolutely drift (and get overloaded) for the same reasons other terms do; the difference is the community of use in which the factors that drive drift/overloading operate.
> To be clear, I'm mostly talking about the same thing that is true of the term "Begging the question." Laymen can use it to mean whatever they want—and I don't begrudge them that, it's a phrase in a language and people will do what they like with it. But that lay-usage will never change what the phrase means in the context of formal deductive reasoning.
Mostly aside, but the popular alternative usage of that phrase is transitive verb phrase, and the older usage is an intransitive verb phrase (which, while this is clearly reversing the etymology, can be viewed as a special case of the transitive form with a particular direct object assumed) so the two neither conflict nor are incompatible. So, it's kind of a bad example of anything other than reflexive pedantry; accepting the alternative usage in formal circles wouldn't be drift or overloading because it is structurally distinct.
I didn't mean Rust was fundamentally OOP, but that Rust implements, through runtime traits, exactly the behavior you are criticizing OOP for. Sorry about that.
Can you please point out exactly what I said that you mean by 'the behavior I'm criticizing OOP for'? Haskell typeclasses are very different from OOP inheritance or OOP classes. In any case, I don't see how that relates to the explicit criticisms I made.
The engineering organization where I work is full of dysfunction. This is because numerous poor practices were ignored or encouraged over the years. Now there is strong inertia against change because the incompetent long-timers know all the tricks and manage their job security through the system as it is.
In your top-level comment you argued it's a bad model and didn't really give an explanation for why. You just make assertions that things are pathological/overly-complex/error-prone in OOP and that other alternatives are better, that it's impossible to reason about memory layout, etc. But I could just as easily assert the opposite is true: that proper OOP does simplify problems down to manageable parts, that encapsulation is what you're supposed to do in proper OOP, that abstracting away memory layout is a good thing, etc... I'm not sure if you find these compelling or not, but if not, then I guess you could see why others feel similarly about your arguments.
IMHO OOP was a part of the big Software Factory model: a relatively simple, quick to understand tool to onboard masses of superficially trained workforce. A few enlightened Architects and Leaders would paint large strokes of class, collaboration, sequence diagrams while hundreds of "coders" would translate these "visions" into shippable artifacts.