
Some ideas that are ubiquitous within functional programming are certainly on the rise, for example:

- functions as first-class entities in programming languages, and consequences like higher-order functions and partial application;

- a common set of basic data structures (set, sequence, dictionary, tree, etc.) and generalised operations for manipulating and combining them (map, filter, reduce, intersection, union, zip, convert a tree to a sequence breadth-first or depth-first, etc.);

- a more declarative programming style, writing specifications rather than instructions;

- a programming style that emphasizes the data flow more than the control flow.
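All four ideas are easy to see in a couple of lines of Haskell. This is a toy sketch to make them concrete (the names are invented for illustration, not from the thread):

```haskell
-- Higher-order functions: map and filter take functions as arguments.
-- Data-flow style: the pipeline reads as "filter, then map".
squaresOfEvens :: [Int] -> [Int]
squaresOfEvens = map (^ 2) . filter even

-- A generalised reduce/fold over a sequence.
total :: [Int] -> Int
total = foldr (+) 0
```

The composition operator (.) is itself a higher-order function, which is what makes the declarative, data-flow style so natural here.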

I see these as distinct, though certainly not independent, concepts.

I’m not sure whether functional programming itself is really on the rise, not to the extent of becoming a common approach in the mainstream programming world any time soon. I don’t think we’ve figured out how to cope with effectful systems and the real world having a time dimension very well yet. (I don’t think we’ve figured it out very well in imperative programming yet either, but the weakness is less damaging there because there is an implicit time dimension whether you want it or not.)



Yes, a lot of these are characteristic of functional programming, and many are being adopted. But I think that the idea of using mathematically pure functions---programming without side effects or mutation---is a/the key idea behind functional programming. And it's this purity that divides the communities. You can take higher-order functions and folds and put them in just about any language, and you could put OOP concepts like subtype polymorphism into functional languages. But there's a line that neither class of languages can cross over, and that's mutable state.

I know that some developers are beginning to lean in the direction of functional programming by relying on const annotations and adopting a functional style. And I'm very excited and hopeful about the overall trend.


But I think that the idea of using mathematically pure functions---programming without side effects or mutation---is a/the key idea behind functional programming.

Indeed, and it is either a blessing or a curse depending on what kind of software you’re trying to write and the sort of situations you’re trying to model.

For most of the programming work I’ve ever done myself, what I’d really like to use is languages that do have concepts like time and side effects, because they’re very useful, but which only use them in controlled ways and when the programmer actually wants them. I’ve often wondered whether such ideas might be elevated to first class concepts within future programming languages, complete with their own rules and specified interactions, just as today we have variables and types and functions, and a language will prevent errors like calling a function with only two values when it expects three or trying to use the value 20 where a string is expected.

For now, I’m hoping that functional programming, in the pure sense, will raise awareness of the potential benefits of controlling side effects rather than allowing them more-or-less arbitrarily as most imperative languages today do. With a bit of luck, the next generation of industrial languages will then borrow ideas that stand the test of time from today’s research into type/effect systems, software transactional memory, and so on, and perhaps we can have something of the best of both worlds.


What you describe is exactly what Haskell does. You have pure functions by default, but you can have mutable state wherever you want it--it is just constrained to a particular scope by the type system. That is, you can use mutable references, mutable arrays and so on all you want inside a computation, and the compiler will guarantee that none of these references leak outside their scope. Haskell accomplishes this with "state threads" (ST[1]).

[1]: http://hackage.haskell.org/packages/archive/base/4.2.0.1/doc...
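A minimal sketch of what ST looks like in practice: an internally imperative sum over a mutable reference. The function's type contains no ST at all, because runST guarantees the STRef cannot escape its scope.

```haskell
import Control.Monad (forM_)
import Control.Monad.ST (runST)
import Data.STRef (modifySTRef', newSTRef, readSTRef)

-- Imperative on the inside, pure on the outside.
sumST :: [Int] -> Int
sumST xs = runST $ do
  acc <- newSTRef 0                        -- local mutable state
  forM_ xs $ \x -> modifySTRef' acc (+ x)  -- destructive updates
  readSTRef acc                            -- final value escapes; the ref does not
```

Trying to return the STRef itself from the runST block would be a compile-time type error, which is exactly the "no leaks" guarantee described above.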

Similarly, Haskell has the IO type for containing any side effects at all. This type, as the name implies, can have not only mutable state but also arbitrary I/O--really anything you like.

This has some very nice advantages: for example, you can create your own wrapper around the IO type which only exposes some IO operations. You could use this for ensuring that plugins can't do things like delete files, for example. (This is one of the example use cases for Safe Haskell[2], which is worth reading about.)

[2]: http://www.haskell.org/ghc/docs/7.4.1/html/users_guide/safe-...
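A sketch of the wrapper idea. The RestrictedIO name and its operations are invented here for illustration; the point is that code written against this type can only perform the capabilities we choose to export.

```haskell
{-# LANGUAGE GeneralizedNewtypeDeriving #-}

-- A hypothetical restricted-IO wrapper. The constructor is not exported
-- in a real module, so plugins cannot smuggle arbitrary IO inside.
newtype RestrictedIO a = RestrictedIO (IO a)
  deriving (Functor, Applicative, Monad)

-- The only capability we choose to expose: printing a line.
sayLine :: String -> RestrictedIO ()
sayLine = RestrictedIO . putStrLn

-- Only the trusted host application runs restricted computations.
runRestricted :: RestrictedIO a -> IO a
runRestricted (RestrictedIO io) = io
</imports>
```

With module-level export control, a plugin typed as `RestrictedIO ()` provably cannot delete files or open sockets, because no such operations exist in its vocabulary.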

You can even reify time explicitly, rather than modelling it implicitly with mutable state. This leads to functional reactive programming (FRP[3]), a new and more declarative way to write things like UIs. FRP replaces event and callback oriented programming, fixing much of the spaghetti code common to most UI frameworks.

[3]: http://stackoverflow.com/questions/1028250/what-is-functiona...

Really, I think "purely functional" is a very unfortunate term for Haskell from a marketing standpoint. "Purely functional" does not mean that we cannot have side effects, or mutation, or what have you. It just means that side effects aren't everywhere by default, and don't implicitly permeate your otherwise pretty code.


I appreciate the comment, but what I have in mind looks very different to the way monads are used in Haskell today.

For example, I don’t necessarily want pure functions by default in my idealised programming model. To me, a pure function is just a special case of a function whose effects are controlled and where the resources the function might interact with are readily identifiable, for which the sets of possible effects and interacting resources happen to be empty.

Put another way, I don’t see anything wrong, or even inferior or undesirable, with performing a quicksort of an array or some fast matrix computations destructively. However, I’d like my language to make sure I couldn’t accidentally try to use the old data again afterwards (including, for example, in other threads running in parallel). I’d like to be sure that just because my algorithm updates one set of data in-place, it doesn’t also draw a window, block to wait for user input, or kick off a background thread that will reformat my hard drive in half an hour. And in the calling code, I’d like to find out very quickly that just because I created and populated an array over here, and then ran a quicksort that changed it over there, and then printed it out down there, nothing else had any chance to modify it in between.

And I’d like to do all of that using syntax that is at least as clean and readable as today’s best languages, please, not the horrors that often seem to result when functional programming languages try to implement equivalent behaviour with realistic heavily structured/monadic values instead of the neat one-integer counter or append-only log that always seems to be used in the tutorial code samples.

I suspect that to do that will require a language that provides dedicated, specialised tools to support the new programming model. Sure, you could probably do this sort of thing using monads in Haskell, just as you could do OOP in C, text processing in C++, or write in a functional style in Python, but the results are clumsy compared to using the right tool for the job. I’m not sure that tool exists yet for the kind of programming models I’d like to use, but that’s where I’m hoping the lessons of today’s functional languages and academic research will bear fruit over time.


That's very attainable in Haskell and is often modeled on the ST monad. You also have a number of "Data types a la carte"-style methods that allow you to build very specific effect contexts in order to precisely guarantee things like the fact that mutable array code never produces events in the UI. These can be achieved by products and compositions of applicative types, free monads, monad transformers, or the "sum of effects" "a la carte" method.

Another big benefit that comes out of these more specific effects is that the compiler can use stream fusion to automatically optimize some inner loops, though I don't know how often that actually happens as of today.

(Oh: if you want to play with "Data types a la carte" they're in the IOSpec package, http://hackage.haskell.org/package/IOSpec and are described---somewhat densely---in http://www.staff.science.uu.nl/~swier004/Publications/DataTy...)


You should really take a look at Haskell. What you describe in your 3rd paragraph is precisely the ST monad.

It's true we're still pretty limited in how effects are typed (most effectful stuff still ends up in the "IO sin-bin"), but this is an area of active research and experimentation. This is the way Haskell's going, and no one else is even close (afaict).


What you describe in your 3rd paragraph is precisely the ST monad.

Not exactly. The ST monad supports local mutable state, essentially turning a computation that uses state internally into something that looks pure from the outside. I’m looking for something more general where, for example, a function could modify state outside its own scope, but only if it advertised explicitly which stateful resources it would read and/or write, so the language could enforce constraints or issue warnings if various kinds of undesirable and/or non-deterministic behaviour would result.
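A partial approximation of "advertising stateful resources" does exist in today's Haskell via mtl-style constraints: the signature names which state a function may touch, and nothing else. This sketch assumes the mtl library; the Int counter is an invented stand-in for a real resource.

```haskell
{-# LANGUAGE FlexibleContexts #-}

import Control.Monad.State (MonadState, execState, modify)

-- The constraint advertises: this function may read/write an Int
-- counter, and has no other effects (no IO, no other state).
step :: MonadState Int m => m ()
step = modify (+ 1)

-- A caller discharges the constraint with a concrete state monad.
runTwice :: Int -> Int
runTwice = execState (step >> step)
```

This falls well short of ordering constraints like "opened before read" or "transaction always committed", which is the gap the comment is pointing at, but it shows the direction: effects named in types rather than ambient.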

And really, I want to generalise the concepts of resources and effects on them much more widely. A stateful resource might be a mutable variable, but it could just as well be a file, a database, a single record within a whole database, or anything else I care to model in that way. An effect might be reading or writing part of the state, but it might also be opening or closing a file, or beginning, committing or rolling back a DB transaction. I want a language that can understand the concept that a file must not be read or written before it is opened, or that a database transaction must always be committed or rolled back once it has begun, and warn me if any code paths could permit an invalid order of effects. I don’t just want to throw half my code into the IO monad and hope for the best.

I want a language where I can pass a reference to a large data structure into a parallel map function and get a compile-time error because, somewhere in the function I was applying using the map, there was an attempt to perform a mutating operation on that shared data without sufficient safeguards. But I want to write almost identical code and have the effects automatically co-ordinated because the language understands that concurrent access is possible so synchronisation is necessary, but the spec for the parallel map says that non-determinism is permitted.

And crucially, I want a language where even simple code that should look like this:

    def fib(n):
        if n < 2:
            return n
        x, y = 0, 1
        for i in range(n - 1):
            x, y = y, x + y
        return y
does not wind up looking like this (from [1]):

    fibST :: Integer -> Integer
    fibST n = 
        if n < 2
        then n
        else runST $ do
            x <- newSTRef 0
            y <- newSTRef 1
            fibST' n x y
     
        where fibST' 0 x _ = readSTRef x
              fibST' n x y = do
                  x' <- readSTRef x
                  y' <- readSTRef y
                  writeSTRef x y'
                  writeSTRef y $! x'+y'
                  fibST' (n-1) x y
[1] http://www.haskell.org/haskellwiki/Monad/ST


Your last complaint is not fundamental to the language: it's just a matter of library design. I guess Haskellers don't like state very much in any guise, so having awkward syntax for it is only natural. Both programs are fundamentally ugly, so having it be superficially ugly as well is no big loss.

However, if you really wanted to, you could have almost C-like syntax[1]. The demo in that particular blog post has its own issues, but there is no fundamental reason why you couldn't apply some of those ideas to the normal ST API. I suspect that people simply don't care enough.

[1]: http://augustss.blogspot.co.il/2007/08/programming-in-c-ummm...


Both programs are fundamentally ugly, so having it be superficially ugly as well is no big loss.

Perhaps we’ll just have to amicably disagree on that point. I think making code superficially ugly is a huge barrier to adoption that has held back otherwise good ideas throughout the history of programming.

Many an innovative programming style or technique has languished in obscurity until some new language came along and made the idea special and provided dedicated tools to support it. Then it becomes accessible enough for new programmers to experiment with the idea, and the code becomes readable enough to use the idea in production, and over time it starts to change how we think about programming. We push the envelope, probably developing some ingenious and useful but horrible abuses of the idea along the way, until we figure out how to make those new ideas available in a cleaner and more widely accessible way and the cycle begins again.

I suspect that people simply don't care enough.

Most people are programming in languages that don’t have the problem in the first place, so they don’t need to care.


> I think making code superficially ugly is a huge barrier to adoption that has held back otherwise good ideas throughout the history of programming.

Haskell tries to make elegant code look nice and inelegant code look ugly. When you see something ugly like the example above, this is a sign that there would be some more elegant way of writing it. That certainly holds in this case, as you don't need any mutation to compute the fib sequence.


For something as simple as fibonacci, you wouldn't use the state monad machinery, but go directly:

    fib' :: Integer -> Integer
    fib' n = fst $ iterate (\(x,y) -> (y,x+y)) (0,1) !! fromInteger n
The state monad helps you keep more complicated code composable. But it does have some syntactic overhead, that makes it lose out in simple examples. (Though with a bit of golfing, you could get syntactically closer to the Python example. For example putting the tuple (x,y) into an STRef instead of having two of them, employing Applicative Functor style in some places, and using a loop-combinator from the monad-loops package.)
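For instance, the tuple-in-one-STRef variant mentioned above might look like this (one possible golfing, closer in shape to the Python loop, though still using explicit recursion rather than a loop combinator):

```haskell
import Control.Monad.ST (ST, runST)
import Data.STRef (newSTRef, readSTRef, writeSTRef)

-- Both loop variables live in a single STRef holding a pair,
-- mirroring Python's "x, y = y, x + y".
fibPair :: Integer -> Integer
fibPair n = runST $ do
  ref <- newSTRef (0, 1)
  let loop 0 = fst <$> readSTRef ref
      loop k = do
        (x, y) <- readSTRef ref
        writeSTRef ref (y, x + y)
        loop (k - 1)
  loop n
```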


Concepts like your file state machine can be achieved using indexed monads and dependent types, though that's still fairly black magic. There's no doubt that the IO monad should have more structure, though, and ST is just one way to achieve that. There isn't a globally recognized standard for providing more structure, but there are a lot of fairly common practices (see IOSpec or free monads).

The syntactic convenience is unbeatable, of course. Monads are only first-class syntactic notions up to `do` notation. Someone is bound to say "let's just implement that in Template Haskell", but nobody really argues that TH macros are the same as lispy ones.


The syntactic convenience is unbeatable, of course.

Exactly.

I’m not in any way criticising all the research and ideas coming out of the functional programming community. Playing with Haskell is driving all sorts of interesting ideas.

I’m just saying I don’t think the mainstream is going to give up a whole bunch of readily accessible and useful programming techniques any time soon, just to gain the benefits of a radically different programming style that sound rather theoretical to Bob the Developer. This is true even if the reality might be that in time Bob’s bug rate would drop dramatically and he would develop new features in a fraction of the time.

I think it’s far more likely that the more accessible and obviously useful parts of that radically different programming style will get picked up over time, and that the underlying general/theoretical foundations will be more useful to the people building the specific/practical tools than to the people using them.

Today, we’ve been discussing how to implement even quite simple effect-related concepts in a functional programming context in terms of (borrowing from a few of your posts in this thread) products and compositions of applicative types, free monads, monad transformers, indexed monads, dependent types, and the problems of uncertain denotation. Poor Bob just wants to get an error message if he forgets to commit his transaction and leaves the DB locked or if he might be dereferencing a null pointer in his network stack. :-)


That's utterly fair. I don't expect Bob the programmer to adopt inconvenient things. I think having static assurances of the kind you were asking for will probably never quite be in Bob's purview either, though.

To clarify, I don't think Haskell is the ultimately useful embedding of many of these concepts. I just also don't think there's that much hand-holding possible. You start to run up against the limits of inference and decidability in this space. Until Bob the programmer starts to meaningfully think about the way to build the effects of his language around different contexts, there's a limit to how many guarantees a compiler is going to be able to make.


What you describe reminds me of Rust's typestate, which they removed. But the "branded types" replacement, combined with the unique pointers, sound promising - see http://pcwalton.github.com/blog/2012/12/26/typestate-is-dead...

I am new to Haskell, but I also have objections to Haskell's notions of purity. For example, accessing your pid or your argv does not have side effects, and certainly does not perform IO, yet these are grouped in the IO monad next to file deletion.


Runtime constant values as IO is a long-standing concern. The existence of withArgs cements the issue (it definitely allows non-referentially-transparent code to be written with getArgs), but there's more to be said about why withArgs exists and why getArgs is in the IO monad.

http://www.haskell.org/pipermail/haskell/2005-October/016574...

The argument usually seems to play out as "their denotation is uncertain" and therefore they get unceremoniously thrown into the IO monad. I tend to agree with that in that I feel a major point of purity is that it gives you a world sufficiently far from the reality of the RTS to play in.

IO's general status as the "Unsafe Pandora's Box" is a wart in Haskell. I'd much prefer it be called the RTS monad and have something like IOSpec's Teletype type performing the classic IO. It's fixed by convention now though.


IO's general status as the "Unsafe Pandora's Box" is a wart in Haskell.

Isn’t that rather like saying mutable variables are a wart in C? :-)


It's more like saying void pointers are a wart in C. It's appropriate in two ways: for one, void pointers are a wart, and for two, there's no good way to get rid of them without radically changing the language.


Sure. Haskell's wart is automatically verified to be context constrained by the type system, though. There are also lots of tools to mitigate that wart that are also verified.


I agree. Tight, stateful algorithms aren't going to disappear and are harder to reason about (currently) in functional languages. I expect that their implementation and use will shrink over time, in the same way that embedding inline assembler has all but disappeared.

I definitely don't expect, or hope, that monads are the final answer to isolating state manipulations, and I agree with a later comment of yours that the IO monad is not granular enough as it is in Haskell. When a large chunk of code ends up being written in the IO monad or in another monad that wraps IO, you miss out on a lot of the safety and ease of reasoning that Haskell is supposed to give you. And I've seen that happen (my own projects included).


"But there's a line that neither class of languages can cross over, and that's mutable state."

I'd add another qualifier for clarity; default or reliable mutable/immutable state. And this really comes up in libraries and such; either the language affords the use of immutability and you can count on libraries being themselves immutable, or the language affords the use of mutation and you can count on the libraries being based on mutation. Immutable languages can "borrow" mutability and mutable languages can "borrow" immutability (const, etc.), but the default matters, and there has to be some sort of default.

(Though while I cast this as a binary, there are some small in-between choices that may be relevant; see Rust, for instance, whose "unique" pointers may be a successful split. If the most useful aspect of immutability is not strictly speaking the inability to mutate values, but instead to guarantee that values can not be mutated by other threads or functions you didn't expect, then what happens if you directly express only that constraint instead of the stronger and more annoying constraint of full immutability? I'm watching that language with interest.)


To me, the future will most likely be languages that allow both functional and OO styles to interoperate. Programmers will pick the style or mix of styles most appropriate to the particular sub-problem they're solving.

We already do this with some of our high-level languages like Ruby and JavaScript. With these, we have higher-order functions, map and friends, closures, etc.. But we also have our familiar OO constructs. Almost every program I write in these languages uses all of the above, not just the functional or OO subset.

I so far have not seen any practical advantage in going purely functional. I've tried it a number of times, but I always find that the real-world programs I write need to be stateful. Yes, functional languages do of course have facilities for handling state, but they always seem super awkward, especially compared to the elegance of state handling in OOP.

For example, consider a simple, 1980s-style arcade game. There are a bunch of entities on screen, each with attributes like velocity, health, etc.. How do you maintain this state in a purely functional language? I've seen various suggestions, but they all seem to boil down to setting up a game loop with tail recursion, and then passing some game state object(s) to this function on each recursion. Doesn't sound so bad, but what happens when you have a bunch of different types of entities? E.g. player characters, monsters, projectiles, pickups, etc..

Well, every time you add a new type of entity (or a new data structure to index existing entities), you could add another parameter to the main game loop. But that gets crazy pretty fast. So then you have the clever idea to maintain one massive game state hash, and just pass that around. But wait, now you've lost something key to functional programming: You can no longer tell exactly what aspects of the game state a function is updating, because it just receives and returns the big game state hash. You don't really know what data your functions depend on or modify. Effectively, it's almost like you're using global variables.

I'm using games as an example here, but the same sorts of problems come up with almost any stateful application.

This is why I prefer languages that allow you to seamlessly mix functional and OO styles. They give you many of the benefits of FP without forcing you to deal with the difficulties described above.


For games and other reactive systems, you should check out functional reactive programming (FRP)[1]. The basic idea is to model time explicitly, working with time-varying values. So you would express your game logic as a network of event streams and signals.

[1]: http://stackoverflow.com/questions/1028250/what-is-functiona...

This is radically different from the normal imperative approach, and I've found it to be much nicer. Admittedly, I haven't tried making games per se, but I have used it successfully for several other interactive UI programs.

FRP is a good approach for any system where you would normally use events and callbacks. This includes UIs and games as well as things like controllers or even music. So at least for those sorts of IO-heavy, stateful domains, there is a very good declarative approach.


FRP is super nice, but no magic. I found that it makes it natural to decompose the codebase into model/view/controller and encourages an explicit state machine for updating the model. At the same time, a time-varying value is a plain old observable. But the discipline enforced by FRP avoids the spaghetti trap most event-based systems eventually fall into. No magic, therefore usable by mere mortals :)


Now that is really cool. You've convinced me that this could be a viable and practical approach to handling highly stateful programs in an FP context.

One question though: Does memory/storage become an issue if you're keeping track of values "over time?" If I understand it correctly, you'd have a constantly growing picture of your data as it has evolved, with a complete history of prior values. (Maybe I'm wrong about this part, though.) For applications that are long-running or handle a lot of data, could this be a fatal problem?


You're not necessarily keeping track of all the old values over time. Rather, the core idea is that you program in terms of abstractions that are explicit about time. That is, you write your program in terms of streams of events or signals.

You have signals and events, but you never ask about the value right now; instead, you take these two abstractions and combine them in different ways to get a reactive network. In a way, it's similar to dataflow programming or circuit design: you connect up "wires" into a network that can constantly push through data. Depending on how you write this network, you will need to remember different amounts of past data at runtime.

If you write your event/signal network carefully, you do not need to keep a history of too many prior values at runtime. This is one of the things modern FRP libraries really try to help you with: writing networks that are efficient. At least the ones I've tried are good at this--I haven't had many space problems in my programs so far.

In summary, this is a potential issue, and you may have to be a little careful in how you write your FRP code. However, modern frameworks try to make it easy to avoid these pitfalls, and there is no fundamental reason you can't use memory efficiently with this model.
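One way to make "no history unless you ask for it" concrete is a toy model in which a signal is just a function of discrete time. This is only a sketch of the shape of the idea (real FRP libraries are far more sophisticated and efficient); all names here are invented:

```haskell
-- A toy discrete-time signal: a value at each tick.
type Time     = Int
type Signal a = Time -> a

-- A constant-velocity signal: remembers nothing.
velocity :: Signal Double
velocity = const 2.0

-- Combinators decide how much history is needed. This naive integral
-- re-sums the past each tick; an efficient library would instead keep
-- one running accumulator, which is all the state the network needs.
position :: Signal Double -> Signal Double
position vel t = sum [vel k | k <- [0 .. t]]
```

The memory cost lives in the combinators you choose, not in the model itself, which is the point being made above.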


I'm pretty new to FRP, but I tend to visualize FRP like an Excel spreadsheet where one value change leads to a chain reaction of many values in the spreadsheet.

Would this paint the right picture?


Very well. In fact, one of very first implementations I've ever heard of was done in Lisp and it was described exactly in this way. I think the library is even called "Cells".


For an example of making Pong using FRP, check out this: http://elm-lang.org/blog/games-in-elm/part-0/Making-Pong.htm...

It's not a very long read, and it will help make FRP more concrete with a practical example.


Well, if the game state is so small, then it's quite easy to combine all these functions together in FRP style.

How about big games, though? For example, World of Warcraft or EVE Online. The game states of these are humongous, and the game state's data constructor would be huge.


> Well, every time you add a new type of entity (or a new data structure to index existing entities), you could add another parameter to the main game loop. But that gets crazy pretty fast. So then you have the clever idea to maintain one massive game state hash, and just pass that around. But wait, now you've lost something key to functional programming: You can no longer tell exactly what aspects of the game state a function is updating, because it just receives and returns the big game state hash. You don't really know what data your functions depend on or modify. Effectively, it's almost like you're using global variables.

You have to explain how that works. By "hash", do you mean a finite map (or dict, in Python terms)?

In Haskell you can use a state monad for these game loops. Some game industry insiders (see http://lambda-the-ultimate.org/node/1277) argue explicitly for this style.
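A sketch of that style, assuming the mtl library's State monad (the GameState fields are invented; a real game record would be much larger):

```haskell
import Control.Monad.State (State, execState, modify)

-- Hypothetical game state, threaded implicitly by the State monad.
data GameState = GameState { playerHp :: Int, score :: Int }

damagePlayer :: Int -> State GameState ()
damagePlayer n = modify $ \gs -> gs { playerHp = playerHp gs - n }

addScore :: Int -> State GameState ()
addScore n = modify $ \gs -> gs { score = score gs + n }

-- One frame of the game loop: reads like imperative code, but the
-- state is a value, not a pile of globals.
tick :: State GameState ()
tick = do
  damagePlayer 3
  addScore 10
```

Running `execState tick initialState` yields the next frame's state, so the loop itself is just repeated application of a pure function.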


A state monad is a decoration of functions that take and return the state. It's certainly prettier, and there are respects in which the parent's complaints are overblown (certainly, it's no worse than imperative languages), but the parent is entirely accurate that a function like

    frobnicateTurboencabulator :: GameState -> GameState
doesn't have a type that tells you anything about what it actually does. Compare that with something like

    on :: (b -> b -> c) -> (a -> b) -> a -> a -> c
where the type tells you basically exactly what the function does.

The state monad doesn't help with this problem:

    frobnicateTurboencabulator :: State GameState ()

Of course, the solution is some combination of:

1) Split functionality into many typeclasses and hide the State monad, so you can see what functionality is accessed by a function (as mentioned in my other comment),

2) Have functions return a description of updates to the world-state instead of performing those updates themselves (so you know what kinds of things a particular function might do). This might have the benefit of letting you compose actions before applying them, which could in some cases be faster if the updates are likely to be large.

There are likely other options, as well.
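A sketch of option 2: functions emit descriptions of updates, and one interpreter applies them, so the signature says exactly which kinds of change are possible. All types here are invented for illustration:

```haskell
-- The vocabulary of possible changes, visible in every signature.
data Update = AddScore Int | Damage Int
  deriving (Show, Eq)

data World = World { hp :: Int, points :: Int }
  deriving (Show, Eq)

-- Returns [Update], so we know it can only score or damage; it cannot,
-- say, draw a window or spawn a monster.
frobnicate :: World -> [Update]
frobnicate w = [Damage 1 | hp w > 0] ++ [AddScore 5]

-- A single interpreter applies descriptions to the world state.
apply :: World -> Update -> World
apply w (AddScore n) = w { points = points w + n }
apply w (Damage n)   = w { hp = hp w - n }

step :: World -> World
step w = foldl apply w (frobnicate w)
```

As noted above, the update lists are also plain values, so they can be inspected, logged, or batched before being applied.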


> Of course, the solution is some combination of:

And as always in Haskell: more type trickery. Like lenses, which help with the splitting. Applicative functors might help with "Have functions return a description of updates to the world-state [...]" for free in a sense similar to http://gergo.erdi.hu/blog/2012-12-01-static_analysis_with_ap...


Lenses are awesome, and can help with pulling data out of a WorldState object and updating it thereafter. They do not do a thing about loss of information in the function signature.


You hit the nail on the head with regard to pure functional style. What I usually find useful is to use OO at the high, macro, structural level to organize the code, and to use a functional style at the lower level to build reusable functions. The lower-level basic pure functions form the building blocks that can be reused in different circumstances since they are stateless. The domain-specific and state-specific aspects are left to OO to abstract and organize.


If you mix FP and OO you'll often end up with terrible FP which simply reproduces the "old" approach: lots of mutable stuff everywhere. Many Clojure toy games are like that: they start with a lot of "variables" in mutable refs.

But it doesn't need to be that bad: you can create a game that is fully deterministic. A game which is purely a function of its inputs. And it can of course be applied to more than games.

The problem is that it feels "weird" to non-FP people.

The state monad is definitely what you're looking for. If you use a state monad approx. 95% of what you just wrote is utter rubbish.

But the state monad (and the maybe monad) sadly aren't that trivial to understand.


> If you mix FP and OO you'll often end up with terrible FP which simply reproduces the "old" approach: lots of mutable stuff everywhere.

What I'm suggesting is that mutable state isn't inherently bad. It's entirely possible that we will someday enter a new age of mainstream, pure FP where we look back at mutable state and cringe. But that's pure speculation. Our current world is full of successful applications written in languages built on mutable state.

So I'm not saying that mutable state is inherently better than the alternatives. Just that the case against it--and for the alternatives--has to be pretty darn compelling to outweigh the tremendous real-world success of languages like Ruby, Python, JavaScript, and so on.


I wouldn't call what you've written a straw-man argument, but neither is it a steel-man...

My biggest takeaway, and an entirely legitimate criticism, is that if all your functions instead become State World (), your type signature no longer gives you much information about what the function does. In light of this I intend, when/if I get around to playing with a game in Haskell (or anything else that involves shipping around a huge wad of state), to look into splitting types of game state changes into multiple typeclasses, so as to see about preserving some more information about what various state transformations touch (both reading and writing)...


The maybe monad has a counterpart from the OO world: the null object pattern. They're not implemented the same way, but they serve the same purpose.
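For comparison, a small sketch of Maybe chaining (the lookup names are invented): each step may fail, and (>>=) short-circuits on Nothing much as a null object silently absorbs method calls.

```haskell
import Data.Char (toUpper)

-- A lookup that can fail, returning Nothing instead of a null.
lookupUser :: Int -> Maybe String
lookupUser 1 = Just "alice"
lookupUser _ = Nothing

-- (>>=) threads the success case; any Nothing stops the chain.
shout :: Int -> Maybe String
shout uid = lookupUser uid >>= \name -> Just (map toUpper name)
```

The key difference from the null object pattern is that the possibility of absence is visible in the type, so the compiler forces callers to handle it.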


What is your third point if not functional programming?


Functional programming, by definition, is expressing a program as a function to be evaluated. That implies that, at least to some extent, all functional programs are written in a declarative style.

However, we use a declarative style for other kinds of language as well: Prolog, SQL and CSS are three very different examples, none of which is a functional programming language similar to ML or Haskell or Clojure.


I don't think there is one uniform definition of functional programming. It's a vague characteristic of a language in the same sense as "object oriented."


> I don't think there is one uniform definition of functional programming.

Only because people are lazy in their words and thinking.


I agree people never evaluate the function early enough, glad I'm not one.


Declarative vs. imperative is kind of a subjective distinction, but functional programming is not the only way to do declarative programming (logic programming and SQL are two examples that come to mind).


I tend to agree. It's a lot like with object-oriented programming: "Hey, this is so awesome! You can encapsulate data, have well-defined interfaces, enforce separation-of-concerns, and, and ..." --> Er, I've always been able to do all that, I just didn't call it OO.

That said, I'm still doing what I can to assimilate those techniques and understand where to apply them.



