
Okay, having read the article, I think it goes too far.

Inheritance hierarchies have their issues, and raganwald touches on them, but there's a strawman argument here.

(Incidentally, raganwald, I've noticed this about all your OO articles. You seem to have a bias against class-based design. It's causing your essays to be less brilliant than they could be.)

Fundamentally, you can think of inheritance as a special case of composition. It's composition combined with automatic delegation.

In other words, if you have A with method foo() and B with method bar(), "A extends B" is equivalent [1] to "A encapsulates an instance of B and exposes 'function bar() { return this._b.bar(); }'."

This is very useful when you want polymorphism. Writing those delegators is a pain in the butt.
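To make that concrete, here's the delegation version of the equivalence spelled out as runnable code (a sketch; the names are invented for illustration):

```javascript
// "A extends B" encoded as composition + a hand-written delegator.
function B() {}
B.prototype.bar = function() { return "bar result"; };

function A() {
  this._b = new B();                 // stands in for the superclass constructor call
}
A.prototype.foo = function() { return "foo result"; };
A.prototype.bar = function() {
  return this._b.bar();              // manually delegate to the encapsulated B
};

var a = new A();
a.bar(); // "bar result" - behaves as if A extended B
a.foo(); // "foo result" - A's own method
```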

More importantly, it tells us how to use inheritance safely. Only use inheritance when 1) you want to automatically expose all superclass methods, and 2) don't access superclass variables.

Now, JavaScript does have the specific problem that you can accidentally overwrite your superclass's variables, and that's worth talking about. But I think that saying "inheritance is bad" goes too far. The article would be stronger if it talked about when inheritance is genuinely useful, the problems it causes, and how to avoid them.

Edit: In particular, I want to see more about polymorphism. Polymorphism is OOP's secret superpower. Edit 2: I'm not saying polymorphism requires inheritance.

[1] Not quite equivalent.




>> In other words, if you have A with method foo() and B with method bar(), "A extends B" is equivalent [1] to "A encapsulates an instance of B and exposes 'function bar() { return this._b.bar(); }'."

Your point would hold only if all subclasses agreed to use only the public interface of their parents. But if you do that, your "inheritance" isn't really classical inheritance any more, it's just something that saves you typing when implementing delegation, like ruby's method_missing.

The article is not taking issue with composition + "an easy, terse delegation mechanism." The article is taking issue with actual inheritance: the sharing of private state between class and subclass. Your claim that the two things are equivalent just isn't true.
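(For what it's worth, JS can get that "easy, terse delegation mechanism" without inheritance at all. A hypothetical sketch in the spirit of method_missing, using a Proxy - the names here are made up:)

```javascript
class Engine {
  constructor() { this._rpm = 0; }
  start() { this._rpm = 800; return "started"; }
  rpm() { return this._rpm; }
}

// Forward any property the wrapper doesn't define to the wrapped instance.
function delegateTo(inner, own) {
  return new Proxy(own, {
    get(target, prop) {
      if (prop in target) return target[prop];
      const value = inner[prop];
      // bind methods so `this` inside them still refers to `inner`
      return typeof value === "function" ? value.bind(inner) : value;
    }
  });
}

const car = delegateTo(new Engine(), {
  honk() { return "beep"; }
});

car.honk();  // wrapper's own method
car.start(); // transparently delegated to the Engine
car.rpm();   // 800 - the state lives in the encapsulated Engine
```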


Sharing private state is not fundamental to inheritance. That's my point. It's a bad idea and people who use inheritance (should) know better than to do that.

That's why I said it was a strawman argument. (It's also a bit hypocritical: raganwald says, "JavaScript does not enforce private state, but it’s easy to write well-encapsulated programs: simply avoid having one object directly manipulate another object’s properties." Somehow he fails to apply the equivalent principle to inheritance.)


It's not a strawman: most people who use inheritance directly access properties defined in a superclass. It's a really, really common problem: in fact, most programmers, and most books, don't consider it a problem at all -- they just consider it "using inheritance."

What is happening here is that you are redefining "inheritance" in a new, more restricted way that is not in line with common usage, and then saying "But real inheritance doesn't have these problems...."


> most people who use inheritance directly access properties defined in a superclass

i think the problem is you're both just running on anecdotes.

to add fuel to that fire: i would definitely side with jdlshore on this one. in Objective-C, for instance, you cannot even access a superclass's private properties[1]. most every team i've worked on has avoided protected properties (for languages like Java which even have them) and encouraged even subclasses to talk to their superclass via the superclass's public interface.

[1] of course, you can always declare a category for the superclass which can expose whatever it wants, but subclasses are in no sense privileged in being able to do this.


It's a strawman because his argument is "avoid class hierarchies", but it is not "class hierarchies" that gives rise to this problem; it's accessing the private state of another class.

His concluding remarks about class hierarchies are: "Class hierarchies create brittle programs that are difficult to modify.", but he hasn't established that - he's only shown that ignoring encapsulation within a class hierarchy creates brittle programs.

If he wants to argue that class hierarchies encourage that sort of behaviour, and should therefore be avoided, then he is welcome to, but he didn't make that argument.

His evidence only shows that breaking encapsulation is bad, even if it is contained inside a class hierarchy. That makes it a strawman, because the thing he has torn down is not the thing he is arguing against.


He's established both things. First, that class hierarchies are bad when they break encapsulation (he calls this the engineering problem with them).

But he also argues that they don't accommodate change well by their nature (he refers to this as the semantic problem):

>> Furthermore, the idea of building software on top of a tree-shaped ontology would be broken even if our knowledge fit neatly into a tree. Ontologies are not used to build the real world, they are used to describe it from observation. As we learn more, we are constantly updating our ontology, sometimes moving everything around.

>> In software, this is incredibly destructive: Moving everything around breaks everything. In the real world, the humble Platypus does not care if we rearrange the ontology, because we didn’t use the ontology to build Australia, just to describe what we found there.


> He's established both things. First, that class hierarchies are bad when they break encapsulation (he calls this the engineering problem with them).

His exact quote was "Class hierarchies create brittle programs that are difficult to modify."

But he has not demonstrated the first part of that except in one specific case. That case may (?) be common, but it not inherent in the problem - class hierarchies do not require breaking encapsulation.

His conclusion is not supported by his argument, because his argument applies only to the strawman he created, and not to the general case to which his conclusion refers.


If I may try to split the difference: His argument applies much more when trying to do OO in JavaScript, because it's much harder to avoid using the parent class's data when you don't have private variables.


Absolutely.

But that's a universal issue with JavaScript OO in that it provides no direct language support for encapsulation.

Developers have, for the most part, learnt to be disciplined about not accessing "private" fields in JS objects. That they (we) have not learnt to apply that lesson with respect to class hierarchies is evidence for how willingly we throw away good principles when we are working in a "special case".

It is also a lesson in why having language features that force developers into good practices is sometimes a net win, even though we might also rail against them (because we dislike their verbosity and/or hand-holding).
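(And to be fair, real encapsulation is achievable in JS even without language support, by keeping "private" state in a closure. A minimal sketch, with invented names:)

```javascript
// Closure-based privacy: the state isn't a property anyone can poke at.
function makeCounter() {
  let count = 0; // genuinely private - reachable only through the closures below
  return {
    increment() { count += 1; return count; },
    value() { return count; }
  };
}

const counter = makeCounter();
counter.increment();
counter.value();  // 1
counter.count;    // undefined - there is simply no property to access
```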


If you don't share private state, then why not just make the leap and switch to typeclasses instead? Then you don't have the restriction of having to implement all the interfaces for a datatype in the same compilation unit.


You're describing a fairly narrow subset of subclassing; notably one that is (as you point out) almost equivalent to composition. I think you're right: this is not a bad thing. However, how useful is it really? The kind of APIs I find myself wanting to pass through are generally small and abstract (and where they're not, I really wish they were).

Languages that make this special form of encapsulation easy suffer for it indirectly (I'm thinking esp. of Java/C# here). Auto-passthrough allows huge APIs that would otherwise be unwieldy. Unfortunately, inheritance isn't enough, and when that happens, the poorly designed APIs with huge surface areas encourage really bad hacks. All in all I'm not convinced that API pass-through is really a net win for a language. There are alternatives too, such as mix-ins or extension methods, that would allow you to manually pass through only the smallest truly necessary core, and just re-mixin the extras.

But this is all about the best case for inheritance. Inheritance in common usage also contains two other less ideal aspects, however: semi-private methods, and virtual methods (method overriding). I'm skeptical either has value. Overriding almost necessarily means tight coupling - you need to understand at some level how the internal state of the superclass works to replace calls (even those that the superclass itself makes!) to methods of the superclass. Protected methods suffer from a similar problem - what kind of API is public enough to allow access by a subclass but not public enough to allow access by a wrapper? Of course, using protected methods makes the previously described problem even worse: it makes it even harder to write a wrapper, necessitating inheritance even when that's perhaps not exactly what you want.

Finally, if all you want is polymorphism, you just need interfaces, not implementation inheritance.

I think you're right in pointing out that the OP's post doesn't do OOP's subtleties justice, but beyond interface inheritance, I don't think I care much for other aspects of inheritance.
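A hypothetical sketch of what "just interfaces" looks like in JS - any object with the right method shape participates, with no shared implementation (the shape names are invented):

```javascript
// Interface-style polymorphism without implementation inheritance:
// anything with an area() method satisfies the implicit "Shape" interface.
const circle = { area() { return Math.PI * 4; } };  // r = 2
const square = { area() { return 9; } };            // side = 3

function totalArea(shapes) {
  return shapes.reduce((sum, s) => sum + s.area(), 0);
}

totalArea([circle, square]); // Math.PI * 4 + 9
```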


*Overriding almost necessarily means tight coupling - you need to understand at some level how the internal state of the superclass works to replace calls (even those that the superclass itself makes!) to methods of the superclass.*

Your problem seems more with badly constructed public apis rather than overriding itself. The only public methods of a class that should be overridable are those with a simple and well-defined expected behavior. Overriding doesn't have to imply tight coupling. So long as it adheres to some expected contract (e.g., input/output ranges and exception handling), an overridable method should be just as much a black box to the parent class as the subclass. An overridable method with no side-effects is equivalent to an initialization parameter to an encapsulated object.

*Protected methods suffer from a similar problem - what kind of API is public enough to allow access by a subclass but not public enough to allow access by a wrapper?*

You have to think of the consumers of the class. Consumers that only need to use the class and are satisfied with the publicly available interfaces only need access to the public methods. Consumers that want to change the behavior of the class (often for the benefit of other consumers) will override the protected methods. Protected methods offer a set of extension points to consumers while providing useful default behavior for those that do not need them.
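A sketch of what I mean, with invented names: the overridable method has a simple, well-defined contract and stays a black box to both sides.

```javascript
class Report {
  // Public API: calls the overridable hook below, knowing only its contract.
  render() { return "== " + this.title() + " =="; }
  // Extension point with a simple contract: return a string, no side effects.
  title() { return "Untitled"; }
}

class SalesReport extends Report {
  // Honors the contract; treats Report as a black box otherwise.
  title() { return "Sales"; }
}

new Report().render();      // "== Untitled ==" - useful default behavior
new SalesReport().render(); // "== Sales =="    - consumer-changed behavior
```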


> You're describing a fairly narrow subset of subclassing; notably one that is (as you point out) almost equivalent to composition.

I'm not sure why you say this is a narrow subset. I'm describing a way of thinking about inheritance (inheritance is like composition + automatic delegation). That way of thinking can be applied to any subclassing operation, and I think it's instructional to do so. It can help you see what's a good idea and what's not.

Let's run down the list. Assume you have a class A with a method foo() and class B with method bar().

The equivalent* of "A extends B" is:

    function A() {
      this._b = new B();              // equivalent* to calling superclass constructor
    }
    A.prototype.bar = function() {
      return this._b.bar();           // manually delegate
    }
Now let's say A accesses its superclass's private variables in order to do something. The equivalent* is:

    A.prototype.foo = function() {
      return this._b._privateVar * 2;     // obviously bad, don't do that!
    }
    
You wouldn't access the private variables of an object you compose; it's obviously bad form. Don't do it when you inherit, either.

Now let's talk overriding the superclass's methods. There are several different ways you could do that. The equivalents* are:

    // Replace superclass method
    A.prototype.bar = function() {
      return "my own thing";            // obviously fine
    }
    
    // Replace superclass method and access superclass state directly
    A.prototype.bar = function() {
      return this._b._privateVar * 2;   // obviously bad, don't do that!
    }
    
    // Extend superclass method
    A.prototype.bar = function() {
      var result = this._b.bar();       // equivalent* to calling superclass method
      return result * 2;                // perfectly okay
    }

    // Extend superclass method and access superclass state directly
    A.prototype.bar = function() {
      var result = this._b.bar();       // equivalent* to calling superclass method
      return result * this._b._privateVar;   // obviously bad, don't do that!
    }
    
Don't access the private variables of your superclass (or objects you compose) and you'll be fine. Sure, you'll be in trouble if the superclass changes the semantics of the parent method, but that's true of all functions everywhere. If the semantics of a function you're using changes, your code probably just broke. It doesn't matter if the function is defined in a superclass or not.

The one thing that's unique to inheritance is the idea of semi-private ("protected") methods that are only visible to subclasses. I agree that they're something to be used sparingly, but they're no different than any other superclass method in how they should be used and overridden. It's a moot point, though, because JS doesn't have them.

*Not exactly equivalent, but close enough for these examples.


I think protected methods are just a bad idea in every case. They don't really offer protection (because a subclass can expose them) and they prevent other forms of composition even where they're more appropriate.

If you're open to inheritance, you're necessarily open to composition - making that messy serves no purpose.

As to this example:

    // Replace superclass method
    A.prototype.bar = function() {
      return "my own thing";            // obviously fine
    }
This is NOT fine - it's really quite nasty. You're breaking encapsulation by affecting how the internals of B work - calls from B's code to bar() will now fail to work as expected. What you want is to expose your own bar to outside code without affecting the encapsulation of the superclass.

And note that none of this mitigates the pit-of-complexity that inheritance encourages as described in the post you replied to (i.e. bloated, hard-to-wrap API's). There are inheritance-like techniques that work better and don't have the downsides.
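To make the contrast concrete (a sketch with invented names): under the composition encoding, replacing bar() leaves B's internal calls alone - which is exactly what overriding under real inheritance, with virtual dispatch, would not do.

```javascript
class B {
  bar() { return "original"; }
  describe() { return "B sees " + this.bar(); } // B's own internal call to bar()
}

class A {
  constructor() { this._b = new B(); }
  bar() { return "replaced"; }                  // only outside callers see this
  describe() { return this._b.describe(); }
}

const a = new A();
a.bar();      // "replaced" - the wrapper's behavior, as intended
a.describe(); // "B sees original" - B's internals are untouched
```

With "class A extends B" and an override of bar(), describe() would see "replaced" instead - the two encodings genuinely differ.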


I don't agree, but I'm going to let it drop. I just wanted to reply to say "thanks" for engaging in a thoughtful conversation.

Somebody's been downvoting thoughtful posts like yours (because they disagree with them, I guess). I wish they'd stop and I wanted to let you know it wasn't me. :-)


Hey, thanks for the friendly sign off! A much nicer way to end a conversation.

Given the subtlety of these issues (what is the right design of such an artificial construct), I guess the only obvious thing is that there's no obviously right answer - let alone that it's easy to explain the pros and cons in a reasonable amount of time on an online forum such as this :-).


"Unlike method invocation, inheritance violates encapsulation" Item 16 of Effective Java 2nd Edition. To the original article, I would improve the wording so that it is clear whether the author is against class hierarchies in Javascript or classes in Javascript. It is not clear to me.

One practical example: In Javascript (CoffeeScript), null values can propagate for a very long time. Calling a non-existent method throws an error immediately, while using a nonexistent field (because it changed in the super class) is not that easy to track down. From my CoffeeScript experience, almost any inheritance brought us a headache when we rapidly iterated on our code - but that doesn't mean that CS class construct isn't useful in defining recipes for well-encapsulated objects.
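A minimal sketch of that failure mode in plain JS (the object and field names are made up):

```javascript
const obj = { name: "widget" };

// A nonexistent method fails loudly, right at the call site.
let threwImmediately = false;
try {
  obj.missingMethod(); // TypeError thrown here
} catch (e) {
  threwImmediately = true;
}

// A nonexistent field fails silently and propagates as undefined.
const v = obj.missingField;   // no error: just undefined
const label = "id: " + v;     // "id: undefined" travels onward through the code
```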


Aside:

Your invocation of Effective Java made me look for Effective Javascript, and it does exist. Amazon users give it five stars: http://www.amazon.com/Effective-JavaScript-Specific-Software...

Can anyone comment on how well this book fulfills the expectations implicit in a book calling itself "Effective X"? Or just how effective the book is with respect to accepted JavaScript practice?


I haven't personally read it (shame on me!). David Herman and I spoke from time to time while I was at Mozilla and he's very smart, a very clear communicator, and he knows the language as only a language lawyer can. He's actually a member of TC-39 (the ECMAScript committee) and had a big hand in the modules coming in ES6.

By every account I've seen, it's a great book and the only reason I haven't read it is that I've already been bitten by all of JS' pitfalls once or twice :)

The real reason I'm posting a comment, though, is to give you a link to the JS Jabber episode in which they talk to Dave about the book:

http://javascriptjabber.com/044-jsj-book-club-effective-java...

That episode will give you a good idea of what the book is like.


It's a very good book. It explains a lot of the 'why' and the inner workings of good practices.

It's also fairly comprehensive, ranging over:

- some evilness (type coercion with ==, eval and its performance toll)
- functions and higher-order functions
- objects and prototypes, with some good explanations of the whole prototype and constructor thing
- arrays, dictionaries, and some things to know about their prototypes
- API design and concurrency

It's 200 pages full of content, I recommend it.


Polymorphism can be achieved without using inheritance.

See Clojure's Protocols or Haskell's type classes for examples of this.


Two points here regarding Haskell. First, a function with typeclass constraints is less polymorphic (in that it will operate on fewer types) than a polymorphic function without typeclass constraints. Of course, [the fully polymorphic version is] also more limited in how it interacts with the corresponding values.

Second, parametric polymorphism in Haskell is statically resolved. You can have polymorphic functions, but any given container still contains a single type. You can still do dynamic polymorphism in Haskell (by storing a list of records of functions, rather than storing data directly) but it doesn't typically involve type classes.


The second point brings up an issue that confused me when I was first learning Haskell; maybe I can help others that are similarly confused. Coming from OO languages, the lack of heterogeneous containers seems painful in Haskell - after all, in OO languages you use containers of superclass or interface pointers all the time. Haskell has a different approach to handling the same problems, though, and it turns out there are several ways you can create (or eliminate the need for) heterogeneous containers.

The idiomatic way you'd create a "heterogeneous container" is to store a single algebraic datatype with different constructors, rather than try to store different types at all. This doesn't actually give you a heterogeneous container, of course, but it works perfectly in most cases, because in most cases the set of things you need to store in the container is closed. You really only need a truly heterogeneous container when you need the ability for someone else to come along and extend that set. Concretely, if you're writing a ray tracing application, you know all the possible shapes you may need to handle, and this approach is perfect.

On the other hand, if you're writing a ray tracing library and you want the library user to be able to define new shapes, you may want to consider another approach. The idiomatic approach here was already mentioned by dllthomas: this is a functional programming language, so use functions! Specifically, use a record of functions, with each function in the record serving the same role as a method in OO. The functions can have private data by closing over it.
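Incidentally, the record-of-functions idiom maps almost one-to-one onto JavaScript, which may help OO-minded readers see what it buys you. A hypothetical sketch (shape names invented for illustration):

```javascript
// Each "shape" is just a record of functions closing over private data;
// users of the library can add new shapes without touching a closed sum.
function makeSphere(radius) {
  return {
    name: () => "sphere",
    surfaceArea: () => 4 * Math.PI * radius * radius // radius is private to the closure
  };
}

function makeBox(side) {
  return {
    name: () => "box",
    surfaceArea: () => 6 * side * side
  };
}

// a heterogeneous, user-extensible collection
const scene = [makeSphere(1), makeBox(2)];
scene.map(s => s.surfaceArea()); // [4 * Math.PI, 24]
```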

Haskell has a couple of other options available, as well.

You can use existential types, but they don't really buy you anything over the record of functions approach other than perhaps looking superficially more like how you'd do things in an OO language. With this approach you define a typeclass and make all the types you want to store instances of it. The container then stores instances of the typeclass, rather than a concrete type.

You can also use Data.Dynamic to create dynamic values, which will allow you to store a truly unconstrained mix of types in the same container. Since you have to cast the dynamic values back to their real type before using them, though, this isn't a great solution - you end up with code that looks similar to chains of 'instanceof' in Java or 'dynamic_cast' in C++.


> The idiomatic approach here was already mentioned by dllthomas: this is a functional programming language, so use functions! Specifically, use a record of functions, with each function in the record serving the same role as a method in OO. The functions can have private data by closing over it.

This is also how one does OO programming in C: just roll your own v-table using a struct of function pointers. You don't get to close over an environment in that case, so you have to be careful to pass everything in.


There are similarities, to be sure. One difference is that state is more often closed over than passed around explicitly.


Records of functions vs typeclasses: in general, you use a typeclass when you want a different type for each different behavior, and there's only one sane behavior choice for each type.


You can use GADTs for this. For example:

  data Showable where
    Showable :: Show a => a -> Showable
This allows you to create a polymorphic container like [Showable 5, Showable "hello"] where the polymorphic type is constrained to be a member of the Show typeclass.


Note that GADTs are a bit overkill for this. All you really need is ExistentialQuantification. GADTs are ExistentialQuantification + TypeEqualities.


You can have polymorphic functions, but any given container still contains a single type.

This is the case for all typed programming languages, not just Haskell. So-called dynamic languages like Python, Ruby or Javascript are merely static languages with but a single type[0].

[0] http://existentialtype.wordpress.com/2011/03/19/dynamic-lang...


In C++, if I have a

    std::list<Parent *>
then some of those pointers may actually point at a Child. The client code doesn't care. This is an important kind of polymorphism.

In Haskell, you can't reasonably express this with typeclasses, which surprises folks new to Haskell. You can still express it (as I mentioned), it just takes a different form (and precisely which form is best can vary with other considerations).


I don't know what you consider reasonable, but you can express this in Haskell with type classes and existential types:

   data P = forall a . Parent a => P a
   type PList = [P]
where Parent is a type class.

You have to be a bit more explicit when using a Child in the position of a Parent when adding to the list and you have to use (fully polymorphic!) pattern matches to extract elements from the list, but personally I consider this a good thing as it's more explicit.

Aside: Existential types is what OOP interfaces are -- and interfaces are often encoded as such in language semantics; see e.g. Types and Programming Languages (Pierce).

EDIT: Typos in code -- unfortunately my Haskell is a little rusty :(.


Yes, "Haskell can't reasonably express this with just typeclasses" is probably what I should have said. With the right extensions, Haskell can do anything, but it's not always going to be a good idea...


What's with the weasel words? Existential types aren't even remotely controversial or dangerous.


In Haskell you don't need to express this with type classes as it is trivially covered by sum types. Type classes are mostly syntactic sugar providing for ad-hoc polymorphism. The real power is provided by the underlying algebraic data types.


Sum types only cover this trivially when the type is closed or when those adding new subtypes can be expected to modify all uses of the type. It's still not the same thing.


The need for an open sum tends to be vanishingly rare in my experience.


It occurs moderately frequently in libraries where a user should be able to define domain specific types along with how the library should treat those types. A classic example would be a raytracer where the user might be adding new kinds of scene elements. It probably shouldn't occur in application code.

For what it's worth, I do think people underestimate the applicability of closed sum types.


I'm aware of the raytracer library example. The solution to that is to not use types to represent shapes. Instead, classify shapes by primitive types (triangles, quads, bezier curves, NURBS, etc.) and use a closed sum for those. It's far less common for a user to want to create a new primitive type and you can always use an escape hatch in the closed sum that allows the user to define their own primitive along with a function to draw it in terms of one of the other primitives.


This thread got silly a while back. I'm abandoning it.


> Of course, it's also more limited in how it interacts with the corresponding values.

This is backwards. Polymorphic values without constraints admit almost no operations at all - you can "copy" them, that's it. This is true to the point that (save diverging) given f :: a -> a, f can only have one meaning.

Invariance of sequence elements is also far less problematic in ML-family languages because they have sums.


"It" here being the fully polymorphic version. Rereading, it does seem misleading (or at best ambiguous) so I've expanded the pronoun.


I see. Sorry, I should have interpreted that more generously.


You're forgiven - certainly calling out the lack of clarity was important!


I haven't really explored this area of Haskell, but I think there are certain cases where this is possible. For example, I think there might be a way to have a list of tuples of `forall a. [(a -> b, a)]`, where a's type can vary, but applying the first element of the tuple to the second will always produce a `b`. I'm not sure if this is actually the case but it seems (theoretically) possible, and certainly would be convenient. More experienced Haskellers feel free to chime in...


Yeah, that's certainly possible. It's also largely frowned upon because it's usually over complex. For instance, in your example

    [exists a . (a -> b, a)]
is literally completely equivalent to

    [b]
as the types ensure there is no other thing that can be done with those pairs.

The convenience factor is thus almost never the case. There are some nice theoretical properties and a great embedding of OO in Haskell via existential typing [0], but it should rarely be used.

[0] http://www.cs.ox.ac.uk/jeremy.gibbons/publications/adt.pdf


OK, since I didn't give a very good example, let me try to show a better one. Let's imagine you're writing a testing library. You have a series of tests; each one of them takes in an input, a function to run on the input, and an expected output.

    data Test i o = Test String i (i -> o) o
and then say your testing function is something like

    runTest :: Eq o => Test i o -> IO ()
    runTest (Test name input f output) = case f input of
        o | o == output -> putStrLn $ name ++ " passed"
          | otherwise   -> putStrLn $ name ++ " failed"
Then let's say you had a bunch of tests. For example, you want to test that addition works:

    test1 = Test "addition" (1, 2) (\(a, b) -> a + b) 3
And you want to test string concatenation:

    test2 = Test "concat" ("hello", "world") (\(a, b) -> a ++ b) "helloworld"
Then you could write your tests as

    doTests = runTest test1 >> runTest test2
Now if you have a lot of tests, it would be nice to put them in a list:

    doTests tests = forM_ tests runTest
However, this would require that every test have the same inputs and outputs. You couldn't do

    doTests [test1, test2]
Even though the resulting type is known (it will be an IO () regardless), and even though runTest will operate on each one, because test1 and test2 have different types, you can't put them all in a list.

I think that `forall` and similar allow you to get around this restriction somehow, but I don't really know how that works.


I mean, I agree such examples exist. I don't think this is yet a truly good example, though. The real advantage to existential types like this are in creation of multiple variants---again I recommend reading Jeremy Gibbons' paper.

But, for completeness, here's how you could write your type

    {-# LANGUAGE ExistentialQuantification #-}
    data Test = forall i o . Test i (i -> o) (o -> o -> Bool) o
Although, note, this is exactly equivalent to `Bool`, although in two ways—if we knew the comparator function was commutative then there'd be just one way to convert to `Bool`.

    testBool :: Test -> Bool
    testBool (Test i fun cmp o) = cmp (fun i) o

    testBool' :: Test -> Bool
    testBool' (Test i fun cmp o) = cmp o (fun i)
But in either case there are no other ways to "observe" the existentially quantified types since we've forgotten absolutely everything besides `Bool`. More likely we would want to also, say, show the input.

    data Test = forall i o . (Eq o, Show i) =>
                Test i (i -> o) o
and this type is now equal to `(String, Bool)`.

    testOff :: Test -> (String, Bool)
    testOff (Test i fun o) = (show i, fun i == o)
So, in general, if you're using existential types you really want to either be using multiple variants or when you have such a combination of observables that it's not worth expressing them all directly.


Somebody has also made a JavaScript library for this: https://github.com/Gozala/protocol


Agreed about raganwald's bias; surprised to see such a naive (IMO) "don't use class hierarchies" post from him.

Your point about "don't access superclass variables" is also spot-on--I don't see how he misses that "self-enforce not calling other objects' properties" (because the language doesn't do it for you) is really not very different than "self-enforce not calling superclass properties".

Per his article, I agree that fragile base classes are a problem, but not every base class is automatically fragile--you can design an API for subclasses (in Java/C# worlds, by being very explicit/thoughtful about what you make private vs protected), just like you design an API for external callers.


Yes! What's wrong in the original article is that the subclasses have inherited access to all the base classes' internals. There is no reason to allow that at all - the base class should hide its implementation and allow subclasses to specialise via public methods. One could use an interface for this if it was useful - which it would be for a library designer, for example.


Except that you can't do that in JavaScript, which is the language the article is (directly) talking about.

But he takes lessons from a language that has no access control to member variables, and tries to apply them to all OO languages. That's the problem with the article, IMHO.


Yeah, I agree. I didn't make that clear and I should have.


    > This is very useful when you want polymorphism.
    > Writing those delegators is a pain in the butt.
What's a scenario where you'd be exposing a large number of delegators? Reading this made me think - maybe your class hierarchy needs to be abstracted more deliberately into structs-with-interfaces vs domain logic classes. (It's more likely I just haven't thought about the kinds of problems you are thinking of.)


What about something like Java's `AbstractList` and `AbstractSet`? These save you a lot of typing when implementing the huge `List` and `Set` interfaces.


Fair point. The problem is solved in Haskell by having default implementations in the typeclass, leaving the programmer with only a few methods to implement (e.g., 'equals' is defined in terms of 'unequals' and vice-versa; implement the one you want to get the rest of the typeclass working).
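The same "implement a few primitives, get the rest for free" idea behind `AbstractList` and typeclass defaults can be sketched in JavaScript too. (A hypothetical sketch; `AbstractSeq` and `Range` are invented names.)

```javascript
// Skeletal implementation: the base prototype defines the bulk of the
// interface in terms of two primitives, get(i) and size(), which
// concrete "subclasses" must supply.
function AbstractSeq() {}
AbstractSeq.prototype.isEmpty = function () { return this.size() === 0; };
AbstractSeq.prototype.contains = function (x) {
  for (var i = 0; i < this.size(); i++) {
    if (this.get(i) === x) return true;
  }
  return false;
};
AbstractSeq.prototype.toArray = function () {
  var out = [];
  for (var i = 0; i < this.size(); i++) out.push(this.get(i));
  return out;
};

// A concrete sequence only writes the two primitives:
function Range(lo, hi) { this.lo = lo; this.hi = hi; }
Range.prototype = Object.create(AbstractSeq.prototype);
Range.prototype.get = function (i) { return this.lo + i; };
Range.prototype.size = function () { return this.hi - this.lo; };

var r = new Range(3, 6);
console.log(r.toArray());   // [3, 4, 5]
console.log(r.contains(4)); // true
```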

That said, that's a fairly rare case. Classes with such a large surface are often a code smell.


I think that part of the problem is that while JavaScript can do OO somewhat, it is not fundamentally an OO language, and if OO is the first tool that you reach for, you are likely doing JavaScript poorly.


JavaScript is a dramatically OO language. It's just not the OO you're used to.

Everything* in JS other than numbers, strings, and booleans is an object. Functions are objects.

See http://www.objectplayground.com for details. (Temporarily down due to server problems, but hopefully back up soon.)

*Not really everything. Not objects: undefined, null, number, string, boolean. Objects: object, array, regexp, function, everything else.
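A quick demonstration of the point, including the usual `typeof null` footnote (the `greet` function is just an illustration):

```javascript
// Functions are ordinary objects: they can carry properties and
// participate in the object graph like anything else.
function greet() { return "hi"; }
greet.callCount = 0; // attaching a property to a function

console.log(typeof greet);            // "function"
console.log(greet instanceof Object); // true

console.log(typeof null); // "object" -- a well-known quirk;
                          // null is NOT actually an object
```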


What definition of OO doesn't include encapsulation?

And encapsulation in JS is fundamentally broken - everything is public.


Yes! I most emphatically agree with this. I do a lot of OO programming in Java or C++ but I think it's a horrible idiom for JS. Although I also think prototypical inheritance in JavaScript is pretty ugly :o) For me, JS seems to work best when treated like a poor man's functional language using libraries like underscore.


> For me, JS seems to work best when treated like a poor man's functional language

I agree.

Others have replied (rightly) that under the hood, JavaScript has a lot of objects. Functions are objects. So in that sense I am wrong: JavaScript is an OO language.

However, the experience of programming well in JavaScript feels more like using a functional language than using an OO language. Good JS has a lot more to do with things such as passing functions to functions or understanding how " fn().then(fn()) " works than it does with class hierarchies, prototypical or classical.
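The functional flavor described above, sketched with invented example functions. Note in passing the difference between `.then(fn)` and `.then(fn())` - the latter calls `fn` immediately and hands its *result* to `then`:

```javascript
function double(x) { return x * 2; }

// Passing functions to functions: the everyday idiom of good JS.
[1, 2, 3]
  .map(double)                                // the function itself
  .filter(function (x) { return x > 2; })
  .forEach(function (x) { console.log(x); }); // 4, 6

// Same idiom with promises: double is passed, not called.
Promise.resolve(21).then(double).then(function (x) {
  console.log(x); // 42
});
```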


Javascript is fundamentally an object-oriented language, it's just prototype-based rather than class-based.


We do something like classes in JavaScript without touching prototypes, and it works pretty well as a natural way of organizing and encapsulating code.

Private members are variables/functions defined within the constructor's closure. Public members are properties added to "this" by the constructor. Mixins can be done by calling another class's constructor on yourself.

The point is that class-based OO can be trivially imposed on JavaScript objects without abandoning the native object construction mechanism like ember.js does. In fact, CoffeeScript does this in order to implement its own classes.
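The pattern described above can be sketched like this (a minimal illustration; `Counter` and `TimestampedCounter` are hypothetical names):

```javascript
// Privates live in the constructor's closure; publics go on `this`.
function Counter() {
  var count = 0; // private: only the closures below can see it
  this.increment = function () { count++; };
  this.value = function () { return count; };
}

// "Mixin": just run another class's constructor on yourself.
function TimestampedCounter() {
  Counter.call(this);       // grafts increment/value onto `this`
  var created = Date.now(); // another private
  this.createdAt = function () { return created; };
}

var c = new TimestampedCounter();
c.increment();
console.log(c.value()); // 1
console.log(c.count);   // undefined -- the closure keeps it private
```

Note that no prototypes are involved, which is exactly the trade-off: genuine privacy and trivial mixins, at the cost of one set of function objects per instance.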


It seems more like a functional language to me, but YMMV: http://stackoverflow.com/a/501053/5599


You seem to be under the impression that those are somehow mutually exclusive.


I'm sorry if you thought so. I am not under that impression.

However as mentioned elsewhere ( https://news.ycombinator.com/item?id=7500280 ) if most of what you do to organise your code involves passing functions to functions and very little of it involves creating class hierarchies or prototype chains, it's a fair assessment that the language that you are using is more functional than OO.

The language that you are using may be a subset of the whole language, but with JavaScript that's given - you have to find the good parts or go mad trying. I was wrong about JavaScript as a whole, but maybe less so about JavaScript as it is successfully used.


It's both functional and object oriented. Also, see Scala for a good combination of functional and OO.



