
Encapsulation is the good part of object-oriented programming for precisely this reason, and most serious software development relies heavily on encapsulation. What's bad about OOP is inheritance.

Microservices (in the sense of small services) are interesting because they are good at providing independent failure domains, but add the complexity of network calls to what would otherwise be a simple function call. I think the correct size of service is the largest you can get away with that fits into your available hardware and doesn't compromise on resilience. Within a service, use things like encapsulation.



Inheritance is everyone's favorite whipping boy, but I've still never been in a codebase and felt like the existing inheritance was seriously hindering my ability to reason about it or contribute to it, and I find it productive to use on my own. It makes intuitive sense and aids understanding and modularity/code reuse when used appropriately. Even really deep inheritance hierarchies, where reasonable, have never bothered me. I've been in the industry for at least 8 years and was a volunteer for longer than that, and I'm currently in a role where I'm one of the most trusted "architects" on the team, so I feel like I should "get it" by now if it's really that bad. I understand the arguments against inheritance in the abstract, but I simply can't bring myself to agree or even really empathize with them. Honestly, I find the whole anti-inheritance zeitgeist as silly and impotent as the movement to replace pi with tau: it's simply a non-issue that's unlikely to be on your mind if you're actually getting work done, IMHO.


The problem with inheritance is that it should be an internal mechanism of code reuse, yet it is made public in a declarative form that implies a single pattern of such reuse. It works more or less, but it also regularly runs into the limitations imposed by that declarativeness.

For example, assume I want to write emulators for old computer architectures. Clearly there will be lots of places where I will be able to reuse the same code in different virtual CPUs. But can I somehow express all these patterns of reuse with inheritance? Will it be clearer to invent some generic CPU traits and make a specific CPU inherit several such traits? It sounds very unlikely. It will probably be much simpler to just extract common code into subroutines and call them as necessary, without trying to build a hierarchy of classes.
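A minimal sketch of the subroutine approach (all names here are invented for illustration, not a real emulator API): two unrelated CPU classes share one flag-computing helper, with no common base class.

```typescript
// Hypothetical sketch: shared helper functions instead of a CPU class hierarchy.
interface Flags { zero: boolean; carry: boolean }

// Common 8-bit add with flag computation, reusable by any CPU that needs it.
function add8(a: number, b: number): { result: number; flags: Flags } {
  const sum = a + b;
  return {
    result: sum & 0xff,
    flags: { zero: (sum & 0xff) === 0, carry: sum > 0xff },
  };
}

// Two unrelated CPU emulators call the same helper; no inheritance required.
class Cpu6502 {
  acc = 0;
  flags: Flags = { zero: true, carry: false };
  adc(operand: number) {
    const { result, flags } = add8(this.acc, operand);
    this.acc = result;
    this.flags = flags;
  }
}

class CpuZ80 {
  a = 0;
  flags: Flags = { zero: true, carry: false };
  addA(operand: number) {
    const { result, flags } = add8(this.a, operand);
    this.a = result;
    this.flags = flags;
  }
}
```

Each CPU keeps its own register names and quirks; only the genuinely common arithmetic is shared.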

Or let's take, for example, search trees. Assume I want to have a library of such trees for research or pedagogic purposes. There are lots of variants: AVL trees, 2-3, 2-3-4, red-black, B-trees and so on. Again, there will be places where I can reuse the same code for different trees. But can I really express all this as a neat hierarchy of tree classes?


> The problem of inheritance is that it should be an internal mechanism of code reuse, yet it is made public in a declarative form that implies a single pattern of such reuse.

Not quite. A simplistic take on inheritance suggests reusing implementations provided by a base class, but that's not what inheritance means.

Inheritance sets a reusable interface. That's it. Concrete implementations provided by a base class, by design, are only optional. Take a look at the most basic is-a examples from intro to OO.

Is the point of those examples reusing code, or complying with Liskov's substitution principle?

The rest of your comment builds upon this misconception, and thus falls apart.


Polymorphism is not related to inheritance. Earlier object-oriented systems did tie the two together (and still do), but only because they were trying all directions. Polymorphism actually becomes clearer without inheritance, and many modern systems introduce it as a separate concept: the interface.

For example, say I am sending requests to an HTTP server. There are several authentication methods, but when we look at the request/method interaction they are all similar. So it would be convenient to have a standard interface here, something like 'auth.applyTo(request)'. Yet would it be a good idea to try to make the different 'Auth' methods subclasses of each other?
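A rough sketch of that idea (all names are made up for illustration): each auth method implements the same small interface, and none of them inherits from another.

```typescript
// Illustrative request shape; a real HTTP client's type would differ.
interface HttpRequest { headers: Record<string, string> }

interface Auth {
  applyTo(request: HttpRequest): void;
}

// Two unrelated auth methods share only the interface, not an ancestor.
class BearerAuth implements Auth {
  private token: string;
  constructor(token: string) { this.token = token; }
  applyTo(request: HttpRequest) {
    request.headers["Authorization"] = `Bearer ${this.token}`;
  }
}

class BasicAuth implements Auth {
  private user: string;
  private pass: string;
  constructor(user: string, pass: string) { this.user = user; this.pass = pass; }
  applyTo(request: HttpRequest) {
    const creds = Buffer.from(`${this.user}:${this.pass}`).toString("base64");
    request.headers["Authorization"] = `Basic ${creds}`;
  }
}
```

Code that sends requests can take any `Auth` without caring which concrete method is in play, which is all the polymorphism the example needs.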

Or another example I'm currently working on: I have a typical search tree, say AVL, but in my case I need to keep references to cells in the tree because I will access it bottom-up. As the tree changes its geometry, the data move between cells, so I need to notify the data about the address change. This is simple: I merely provide a callback, and the tree calls it with each new and changed cell address. I can store any object as long as it provides this callback interface. Does this mean I need to make all objects I am going to store in a tree inherit some "TreeNotifiable" trait?
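The callback version might look roughly like this (a simplified sketch with invented names; a real AVL tree would move cells during rebalancing rather than via an explicit swap):

```typescript
// The store takes a plain callback instead of requiring stored values
// to implement a "TreeNotifiable" trait.
type OnMove<T> = (value: T, newAddress: number) => void;

class CellStore<T> {
  private cells: T[] = [];
  private onMove: OnMove<T>;
  constructor(onMove: OnMove<T>) { this.onMove = onMove; }

  insert(value: T): number {
    this.cells.push(value);
    return this.cells.length - 1;
  }

  // When restructuring moves data between cells, notify via the callback.
  swap(i: number, j: number) {
    [this.cells[i], this.cells[j]] = [this.cells[j], this.cells[i]];
    this.onMove(this.cells[i], i);
    this.onMove(this.cells[j], j);
  }
}
```

Any value type works; the caller decides what "being notified" means by supplying the function, with no type-level obligation on the stored objects.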

Polymorphism happens when we split a system into two components and plan the interaction between them. The internals of a component do not matter, only the surface. Inheritance, on the other hand, is a way to share some common behavior between two components, so here the internals do matter. These are really two different concepts.


My example complied perfectly with Liskov's substitution principle. Much better than examples like "a JSON parser is a parser". The system I worked on had perfect semantic subtyping.

Liskov substitution won't save you, and I'm quite tired of people saying it will. The problem of spaghetti structures is fundamental to what makes inheritance distinct from other kinds of polymorphism.

Just say no to inheritance.


> [...] it's simply a non-issue that's unlikely to be on your mind if you're actually getting work done IMHO.

Part of why I get (more) work done is that I don't bother with the near-useless taxonomical exercises that inheritance invites, and I understand that there are ways of writing functions for "all of these things, but no others" that are simpler to understand, maintain and implement.

The number of times you actually need an open set of things (i.e. what you get with inheritance) is so laughably low it's a wonder inheritance ever became a thing. A closed set is far more likely to be what you want, and it is trivially represented as a tagged union. It just so happens that C++ (and Java) historically had absolutely awful support for tagged unions, so people made do with inheritance even though it doesn't do the right thing. Some people have then taken this to mean that's what they ought to be using.
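For concreteness, here is what a closed set looks like as a tagged (discriminated) union, using a toy shape example of my own rather than anything from the thread:

```typescript
// A closed set of variants: the union type lists every member up front.
type Shape =
  | { kind: "circle"; radius: number }
  | { kind: "rect"; width: number; height: number };

// The compiler knows the set is closed: if a new variant is added to
// Shape, every switch that misses it becomes a type error.
function area(s: Shape): number {
  switch (s.kind) {
    case "circle":
      return Math.PI * s.radius * s.radius;
    case "rect":
      return s.width * s.height;
  }
}
```

With an inheritance hierarchy the set stays open, so nothing tells you which subclasses exist or whether a given operation handles them all.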

> I've been in the industry for at least 8 years and a volunteer for longer than that, and I'm currently in a role where I'm one of the most trusted "architects" on the team, so I feel like I should "get it" by now if it's really that bad.

I don't think that's really how it works. There are plenty of people who have tons of work experience but have bad ideas and are bad at what they do. You don't automatically gain wisdom, and there are lots of scenarios where you end up reinforcing bad ideas, behavior and habits. It's also very easy to get caught up in a collective of ideas that are poorly thought out in aggregate: most of modern C++ is a great example of the kind of thinking that will absolutely drag maintainability, readability and performance down, yet most of the ideas can sound good on their own, especially if you don't consider the type of architecture they'll cause.


The difference between inheritance and composition as tools for code reuse is that, in composition, the interface across which the reused code is accessed is strictly defined and explicit. In inheritance it is weakly defined and implicit; subclasses are tightly coupled to their parents, and the resulting code is not modular.
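A small invented example of the contrast (none of these class names come from the thread): with composition the reused code sits behind a deliberately narrow interface, while with inheritance a subclass can reach straight into protected internals.

```typescript
// Composition: Service can only use what Logger deliberately exposes.
class Logger {
  private lines: string[] = [];
  log(msg: string) { this.lines.push(msg); }
  count(): number { return this.lines.length; }
}

class Service {
  private logger: Logger;
  constructor(logger: Logger) { this.logger = logger; }
  doWork() { this.logger.log("work done"); }
}

// Inheritance: the subclass sees protected internals, so the effective
// interface between parent and child is implicit and wide.
class LoggerBase {
  protected lines: string[] = [];
  log(msg: string) { this.lines.push(msg); }
}

class NoisyService extends LoggerBase {
  doWork() { this.lines.push("work done"); } // bypasses log() entirely
}
```

If `Logger.log` later gains invariants (timestamps, size limits), `Service` keeps working, while `NoisyService` has silently opted out of them.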


So you've never worked on a code base with a 3-level+ deep inheritance tree and classes accessing their grandparent's protected member variables and violating every single invariant possible?


> 3-level+ deep inheritance tree and classes accessing their grandparent's protected member variables

Yes, I have. Per MSDN, a protected member is accessible within its class and by derived class instances - that's the point. Works fine in the game I work on.

> violating every single invariant possible

Sure, sometimes, but I see that happen without class inheritance just as often.


If you are reading a deep and wide inheritance hierarchy with overridden methods, you will have to navigate through several files to understand where the overrides occur. Basically, multiply the number of potential implementations by inheritance depth times inheritance width.

You may not be bitten by such an issue in application code, but I've seen it in library code, particularly from Google, AWS, various auth libraries, etc., due to having to interop with multiple APIs or configurations.


I'm glad it's been useful to you!

I can only share my own experience here. I'm thinking of a very specific ~20k LoC part of a large developer infrastructure service. This was really interesting because it was:

* inherently complex: with a number of state manipulation algorithms, ranging from "call this series of external services" to "carefully written mutable DFS variant with rigorous error handling and worst-case bounds analysis".

* quite polymorphic by necessity, with several backends and even more frontends

* (edit: added because it's important) a textbook case of where inheritance should work: not artificial or forced at all, perfect Liskov is-a substitution

* very thick interfaces involved: a number of different options and arguments that weren't possible to simplify, and several calls back and forth between components

* changing quite often as needs changed, at least 3-4 times a week and often much more

* and like a lot of dev infrastructure, absolutely critical: unimaginable to have the rest of engineering function without it

A number of developers contributed to this part of the code, from many different teams and at all experience levels.

This is a perfect storm for code that is going to get messy, unless strict discipline is enforced. I think situations like these are a good stress test for development "paradigms".

With polymorphic inheritance, over time, a spaghetti structure developed. Parent functions started calling child functions, and child functions started calling parent ones, based on whatever was convenient in the moment. Some functions were designed to be overridden and some were not. Any kind of documentation about code contracts would quickly fall out of date. As this got worse, refactoring became basically impossible over time. Every change became harder and harder to make. I tried my best to improve the code, but spent so much time just trying to understand which way the calls were supposed to go.
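A stripped-down sketch of the call pattern described above (all names invented): the parent "downcalls" into an overridable method, and the override "upcalls" back into parent helpers, so the control flow ping-pongs across the hierarchy.

```typescript
class Base {
  run(): string[] {
    const trace = ["Base.run"];
    trace.push(...this.step()); // downcall into whatever override exists
    return trace;
  }
  protected step(): string[] { return ["Base.step"]; }
  protected helper(): string { return "Base.helper"; }
}

class Child extends Base {
  // Override reached by the downcall...
  protected step(): string[] {
    // ...which immediately upcalls back into the parent.
    return ["Child.step", this.helper()];
  }
}
```

Even in this tiny example, reading `Base.run` alone no longer tells you what executes; scale that to dozens of methods and several levels and the "which way do the calls go" question becomes the whole job.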

This experience radicalized me against class-based inheritance. It felt that the easy path, the series of local decisions individual developers made to get their jobs done, led to code that was incredibly difficult to understand -- global deterioration. Each individual parent-to-child and child-to-parent call made sense in the moment, but the cumulative effect was a maintenance nightmare.

One of the reasons I like Rust is that trait/typeclass-based polymorphism makes this much less of a problem. The contracts between components are quite clear since they're mediated by traits. Rather than relying on inheritance for polymorphism, you write code that's generic over a trait. You cannot easily make upcalls from the trait impl to the parent -- you must go through an API designed for this (say, a context argument provided to you). Some changes that are easy to do with an inheritance model become harder with traits, but that's fine -- code evolving towards a series of messy interleaved callbacks is bad, and making you do a refactor now is better in the long run. It is possible to write spaghetti code if you push really hard (mixing required and provided methods), but the easy path is to refactor the code.
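A rough TypeScript analogue of that context-argument pattern (illustrative names only, not the actual system being described): the implementation can't upcall into the driver; its only channel back is the context it is explicitly handed.

```typescript
// The context is the whole API the implementation gets to see.
interface Ctx { emit(line: string): void }

interface Backend {
  process(input: string, ctx: Ctx): void;
}

// Generic driver: it decides what the context does; backends cannot
// reach any other driver internals.
function drive(backend: Backend, inputs: string[]): string[] {
  const out: string[] = [];
  const ctx: Ctx = { emit: (line) => out.push(line) };
  for (const input of inputs) backend.process(input, ctx);
  return out;
}

// One backend among potentially many; it only knows Ctx.
const upper: Backend = {
  process(input, ctx) { ctx.emit(input.toUpperCase()); },
};
```

Because the driver-to-backend and backend-to-driver channels are both spelled out, the "which way do the calls go" question has a fixed answer by construction.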

(I think more restricted forms of inheritance might work, particularly ones that make upcalls difficult to do -- but only if tooling firmly enforces discipline. As it stands though, class-based inheritance just has too many degrees of freedom to work well under sustained pressure. I think more restricted kinds of polymorphism work better.)


> This experience radicalized me against ...

My problem with OO bashing is not that it isn't deserved, but that it seems in denial about pathological abstraction in the other paradigms.

Functional programming quickly goes up its own bum with ever more subtle function composition, functor this, monoidal that, effect systems. I see inheritance-style layering reinvented in ad-hoc, lazily evaluated doom pyramids.

Rich type systems spiral into astronautics. I can barely find the code in some de facto standard crates; instead it's deeply nested generics: generic traits that take generic traits, implemented by generic structs, called by generic functions. It's an alphabet soup of S, V, F, E. Is that Q about error handling, an execution model, or data types? Who knows! Only the intrepid soul who chases the tail of every magic letter can tell you.

I wish there were a panacea, but I just see human horrors, whether in dynamically-typed monkey-patch chaos or the trendiest esoterica. Hell, I've seen a clean-room invention of OO in an ancient Fortran codebase by an elderly academic unaware it was a thing. He was very excited to talk about his phylogenetic tree, its species and shared genes.

The layering the author gives as "bad OO" (admin/user/guest/base) will exist in the other styles too, with its own pros and cons. At least the OO version separates each auth level and shows the relationship between them, which can be a blessed relief compared to whatever impenetrable soup someone will cook up in another style.


The difference, I think, is that much of that is not the easy path. Being able to make parent-child-parent-child calls is the thing that distinguishes inheritance from other kinds of polymorphism, and it leads to really bad code. No other kind of polymorphism has this upcall-downcall-upcall-downcall pattern baked into its structure.

The case I'm talking about is a perfect fit for inheritance. If not there, then where?


Encapsulation arguably isn’t a good part, either. It encourages complex state and as a result makes testing difficult. I feel like stateless or low-state has won out.


Encapsulation can be done even in Haskell, which avoids mutable state, by instead using modules that don't export their internals, smart constructors, etc. You can e.g. encapsulate the logic for dealing with Redis in a module and never expose the underlying connection logic to the rest of the codebase.
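The same module-level encapsulation can be sketched without classes in TypeScript too (a toy stand-in, not a real Redis wrapper): the factory is the only way in, validation happens at the boundary, and the internals live in a closure that is never exported.

```typescript
// The "module" exposes only this interface and a smart constructor;
// the connection details never leak to callers.
type Store = { get(key: string): string | undefined };

function openStore(url: string): Store | null {
  // Validate at the boundary (toy check standing in for real connection logic).
  if (!url.startsWith("redis://")) return null;
  // Internal state captured in a closure; toy in-memory data for the sketch.
  const data = new Map<string, string>([["greeting", "hello"]]);
  return { get: (key) => data.get(key) };
}
```

Callers can only ever hold a validated `Store`; there is no way to poke at `data` or the URL handling from outside.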


Hmm, to me encapsulation means a scheme where the set of valid states is a subset of all representable states. It's kind of a weakening of "making invalid states unrepresentable", but is often more practical.

Not all strings are valid identifiers, for example, and it's hard to represent "the set of all valid identifiers" directly in the type system. So encapsulation is a good way to ensure that a particular identifier you're working with is valid -- helping scale local reasoning (the code that validates identifiers) up into global correctness.
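As a sketch of that identifier example (hypothetical names, and a deliberately simple validity rule): the constructor is private, so the validating smart constructor is the only way to obtain a value, and every `Identifier` in circulation is valid by construction.

```typescript
class Identifier {
  readonly value: string;

  // Private: outside code cannot construct an unvalidated Identifier.
  private constructor(value: string) { this.value = value; }

  // Local validation, relied upon globally: invalid states never escape.
  static parse(s: string): Identifier | null {
    return /^[A-Za-z_][A-Za-z0-9_]*$/.test(s) ? new Identifier(s) : null;
  }
}
```

Downstream code that receives an `Identifier` needs no re-validation; the set of valid states is a proper subset of all strings, enforced at the one entry point.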

This is a pretty FP and/or Rust way to look at things, but I think it's the essence of what makes encapsulation valuable.


What you’re talking about is good design but has nothing to do with encapsulation. From Wikipedia:

> In software systems, encapsulation refers to the bundling of data with the mechanisms or methods that operate on the data. It may also refer to the limiting of direct access to some of that data, such as an object's components. Essentially, encapsulation prevents external code from being concerned with the internal workings of an object.

You could use encapsulation to enforce only valid states, but there are many ways to do that.


Well whatever that is, that's what I like :)


Not only network calls, but also parallelism, when that microservice does some processing on its own or is called from another microservice as well.

Add to that a database with all the different kinds of transaction semantics, and you have a system that is way above the skill set of the average developer.



