What is a functional programming language? (enfranchisedmind.com)
66 points by e1ven on July 23, 2009 | hide | past | favorite | 33 comments


I found this post to be a poor attempt at shoehorning the notion of functional languages into lambda calculus. Sure, functional programming is inspired by lambda calculus, but it doesn't reduce to it, and, what's important, never has - from the very first LISPs on.

So, for example, Church's encoding of naturals is introduced, which is totally irrelevant to all practical functional languages. Despite what the author is saying, using computationally sane integers is not "nothing more than an optimization", but a fundamental part of the language (and don't get me started on floats). It's just that actual functional programming languages model a lambda calculus with atoms - basic objects, like integers, that are not functions. It's really OK - not everything needs to be a function! The fact that it's possible, in pure lambda calculus without atoms, to model integers using Church's trick is cool and has theoretical significance - but no relevance at all to functional languages.
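For readers who haven't seen the encoding being discussed: here is a sketch of Church numerals in Python (an illustrative example, not from the thread or the article). A numeral n is represented as a function that applies its argument n times:

```python
# Church numerals: the number n is "apply f, n times".
zero = lambda f: lambda x: x
succ = lambda n: lambda f: lambda x: f(n(f)(x))
add  = lambda m: lambda n: lambda f: lambda x: m(f)(n(f)(x))

def to_int(n):
    """Convert a Church numeral back to an ordinary integer."""
    return n(lambda k: k + 1)(0)

two   = succ(succ(zero))
three = succ(two)
print(to_int(add(two)(three)))  # 5
```

Elegant, but as the comment above says: no practical functional language represents its integers this way.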

Similarly, functional programming has always included languages with mutable data. LISP was mutable from day 1, and so was the even more functionally-inclined Scheme. There's nothing in "functional programming", as has been practiced for decades, that necessarily excludes mutable data. Now, "pure lambda calculus" is another story... but it really is another story, one that serves as a motivation for functional programming rather than all of its being.

The requirement that a functional language, in the author's opinion, must have tail call optimization is weird (many LISPs don't have that), and its explanation is weirder yet: a functional language must have TCO because it's based on lambda calculus, where everything is done with recursion, so you need infinite stacks (???), but real stacks aren't infinite, so "you need tail call optimization to allow the use of recursion for looping". Well, what if you don't use recursion for looping, just as you don't use Church's encoding for integers, because you are not after all writing in lambda calculus?
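The trade-off can be sketched in Python, which (like many Lisps) does not do TCO - this is my own illustration, not the article's:

```python
# A tail-recursive sum: without TCO, each call adds a stack frame.
def sum_rec(n, acc=0):
    return acc if n == 0 else sum_rec(n - 1, acc + n)

# The equivalent loop needs no TCO: constant stack, mutable locals.
def sum_loop(n):
    acc = 0
    while n > 0:
        acc, n = acc + n, n - 1
    return acc

print(sum_rec(100) == sum_loop(100))  # True
# sum_rec(10**6) would blow the recursion limit in CPython (no TCO);
# sum_loop(10**6) runs fine.
```

If your language lets you loop without recursion, TCO is a nicety, not a prerequisite for being functional.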

Overall, I felt that the author let his intuition of what a functional language could be settle too firmly on Haskell (with the understanding that static typing is optional), and is trying to redefine functional programming in accordance with this vision.


Lisp was by all accounts an attempt to make a working lambda calculus[1]. However the machines weren't very quick then and speed was a real issue. Even later on when Scheme was designed, speed was an important consideration, and it probably did affect the resulting language.

What the author was saying about TCO was that, based on the fact that all our machines today use stacks, and not wanting to have programs that break(!), any implementations of lambda calculus on these machines must have TCO to allow as much of the lambda calculus as possible to be available in the programming language, lest you reject otherwise correct programs.

But make no mistake, Lisp[2] and functional programming languages ARE based on the lambda calculus. The article really did hit the nail on the head.

--

[1] "Lisp was originally created as a practical mathematical notation for computer programs, based on Alonzo Church's lambda calculus."

-- Wikipedia (http://en.wikipedia.org/wiki/Lisp_%28programming_language%29)

(and lots of other sources agree with this)

[2] Sorry to say it, but Lisp isn't the center of the Universe. It's a cool trick, but it isn't that great.


And quite relevant to this topic, our compilers and computers have (since the invention of Lisp and Scheme) gotten hugely better to the point where we can make much closer approximations to the lambda calculus such as the inclusion of lazy evaluation and immutability.


I'm not sure I agree with this first statement. For sure Scheme was motivated by lambda calculus, but I do not believe McCarthy (a functional analyst by training) was aware of lambda calculus when he invented Lisp. Rather, he was motivated by symbolic calculations often performed in functional analysis.

NOTE: I stand corrected, I just checked and McCarthy's Lisp paper does cite Church's thesis work. Somehow I had thought Algol was motivated by lambda calculus


Interestingly, I believe Scheme was a failed[1] attempt at building OOP

[1] of course a closure is an object with exactly one method, apply
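The footnote's point - a closure is an object with exactly one method, apply - can be sketched in Python (an illustrative example, not from the thread):

```python
def make_account(balance):
    # The closure captures 'balance'; calling it IS the single
    # "apply" method, playing the role of an object's message handler.
    def apply(amount):
        nonlocal balance
        balance += amount
        return balance
    return apply

acct = make_account(100)  # "instantiate the object"
acct(-30)                 # "send it a message"
print(acct(10))           # 80
```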


No, it was an attempt to implement the Actor model in Lisp. The Actor model isn't what people usually have in mind when someone says OOP - it's more like Erlang or Termite.


Sorry to keep replying like this, but I'm just utterly dismayed by your upvotes. To put it bluntly, you are wrong (particularly paragraph 1 sentence 1 and paragraph 3, mutability).

I'm not sure how you have arrived at Lisp being your ad hoc definition of functional programming, but it frightens me.

The main issue here is that the article isn't saying that only the lambda calculus is functional programming; rather, it is saying that the whole paradigm is based on the lambda calculus, and so to determine how functional a language is, you determine how similar it is to the lambda calculus.

Wikipedia on functional programming:

"Functional programming has its roots in the lambda calculus, a formal system developed in the 1930s to investigate function definition, function application, and recursion. Many functional programming languages can be viewed as embellishments to the lambda calculus."

http://en.wikipedia.org/wiki/Functional_programming


Feel free to reply as many times as you'd like; however, you're a bit confused. You haven't read my comment attentively - Lisp is not my ad hoc definition of functional programming, and nothing in my words indicates that. I just used Lisp a few times as an example to show that the original post's idea of what a functional language is is too narrow and unreasonable.

You're saying in one paragraph that the paradigm of functional programming is "based on" the lambda calculus, and to support that you're citing Wikipedia as saying it "has its roots" in the lambda calculus. These are two very different things. You can have roots in something, and yet also contain other crucial ideas. Functional languages are like that: their basic syntax is inspired by the lambda calculus to a significant degree, but they don't reduce to it.

It's a mistake to assert, as you do so confidently, that how functional the language is depends on how similar it is to the lambda calculus. There is no Council on Functional Languages that ever decreed it to be the case. What "functional programming" and "a functional language" mean is relatively messy, like most things in computing, and is determined by how people have been using these words, more or less consistently, over the decades. If you look at that, you'll discover, for example, that while having functions as first-class citizens was always a definite requirement of functional programming, neither immutable state nor TCO ever were.

It seems to me that, speaking generally, many people learn about the lambda calculus and get their minds blown. Lambda calculus is a very beautiful thing, but it did not invent passing functions to functions, returning functions as results, everything is a function, etc. That started four decades before with Russell & Whitehead's theory of types (numbers, first-order functions, second-order functions that act on first-order functions, third-order functions and so on) and was commonplace by Church's time. Church's contribution was a particularly simple and economical system of notation that used just one operator (the lambda) to climb the tower of types. Another ingenious notational idea was to use simple juxtaposition as function application. Functional languages later used these two ideas to great effect, but they never reduced to these two ideas and nothing else. What about conditional execution? Named variables? Lists as a fundamental structure? Static typing? Sure, you can simulate most of these in vanilla lambda calculus, just as you can simulate numbers with Church's encoding. But how is that any different from saying you can simulate loops and subroutine calls on a Turing machine, and therefore these are "just optimizations"?

To sum up: the common underlying theme for functional languages, if you look at them, has been freedom to use functions as values everywhere values are used, freedom to compose them at will, and freedom to easily define new ones. The syntax of working with functions and defining them has usually been inspired by the lambda calculus (though see Dylan for a weird exception). Being as close as possible to the lambda calculus is something that's never been a goal of functional programming as a whole, though recently some languages have gotten surprisingly close to it. Having features incompatible with or orthogonal to the usual understanding of lambda calculus has always been OK. TCO is an issue for language zealots to fight over, not a serious contender for what is or isn't functional.


It's a mistake to assert, as you do so confidently, that how functional the language is depends on how similar it is to the lambda calculus.

The article agrees with this assertion, Wikipedia agrees with this assertion and I agree with this assertion. Why is it a mistake to assert this? Granted you point to earlier type theory which I was unaware of, but my claim wasn't that lambda calculus was original.

If you look at that, you'll discover, for example, that while having functions as first-class citizen was always a definite requirement of functional programming, neither immutable state nor TCO ever were.

Here is another big gripe of mine. Given that there are practical reasons for the earlier functional languages to have been different from the lambda calculus (speed and memory), why continue using those languages as metrics later on when the limitations of speed and memory no longer apply?


Modern functional programming languages are no more similar to the lambda calculus than modern imperative languages are to Turing machines.

In lambda calculus, functions are declared and manipulated. Data can be encoded as a function and vice versa. Sound like any functional languages you know (cough Lisp cough)?

In Turing machines, cells on the tape (or variables) are marked (or declared) and manipulated. But try shoehorning OOP into this model. It's a model. It may not capture all the details.

It's like how modeling the crash safety of a car is similar to idealized theoretical physics. Sure, safety systems differ widely across cars, but ultimately, we have models that put limits on how likely a person is to survive a crash. The model isn't perfect, but it captures the essence of the difference between the two kinds of languages.


I think your comment would be perfectly reasonable if you were trying to argue against a rigid definition of a _Functional Language_. But I think it's off base to say 'There's nothing in "functional programming", as has been practiced for decades, that necessarily excludes mutable data'.

Functional programming DOES NOT PERMIT mutability or side-effects. Lisps such as Scheme may lend themselves well to a functional style, but mutability is not an element of functional programming. I could write in a functional style in C as well, by AVOIDING mutability.

If we're just splitting hairs about "how functional" a language has to be to be considered a "Functional Programming Language" then you're welcome to label as you like. But if you are suggesting that looping without recursion (i.e. with mutable variables) is in any way Functional, then you are just wrong.


Functional programming DOES NOT PERMIT mutability or side-effects.

I think the word you don't realize is missing there is "Pure".

Pure functional programming, and pure functional style, is everything you think functional programming and functional style are. Purity is precisely the technical requirement that constrains a function to not have side-effects and to not mutate data. If, as you say, functional programming did not permit mutability, the term "pure functional programming" would have been meaningless.

But functional programming is not, and has never been, solely pure functional programming; and despite the positive connotations that the word "pure" commonly evokes, functional programming with mutable data is not some kind of inferior version of pure functional programming, a poor distant relative fresh from the provinces come to beg favors from a wealthy merchant. No. This belief commonly stems from ignorance of the history of functional languages coupled with wide-eyed admiration of an especially elegant pure functional idiom recently learned.

Some people believe that the purer the better (this belief has only gained popularity relatively recently, in fact), others think that mutable state is just fine where it helps rather than hinders. Both kinds of people happily write functional programs using functional programming languages.


Really enjoyed the explanation of how addition falls out of the lambda calculus, as I'd never seen that before.


You can also do sequencing (assuming applicative order) and conditionals:

  'a; b' ==> (fn x -> b) a

  true  = fn x -> fn y -> x
  false = fn x -> fn y -> y
  if    = fn p -> fn c -> fn a -> p c a
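These encodings can be run directly; here is the same thing transcribed into Python lambdas (my transcription, not the original commenter's):

```python
# Church booleans and 'if', mirroring the definitions above.
true  = lambda x: lambda y: x
false = lambda x: lambda y: y
if_   = lambda p: lambda c: lambda a: p(c)(a)

print(if_(true)("then")("else"))   # then
print(if_(false)("then")("else"))  # else
```

Note that under applicative order both branches are evaluated before the boolean selects one, so in practice you'd wrap the branches in thunks.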


I really enjoyed the distinction between languages based on turing machines and languages based on lambda calculus. I'd had a fuzzy idea of the distinction before, but this article set it out clearly.


Okay, here's the argument as I read it:
1) functional languages are made of lambdas
  a) when they aren't made of lambdas, it's "just an optimization"
2) non-functional languages aren't made of lambdas
  b) when they have lambdas, it's just "style"


Try this:

Functional programming languages have an underlying model based on function composition (with one-argument functions, etc.). To the extent that they have functions with multiple arguments, etc., it's ultimately just syntactic sugar and optimization for common cases. While nobody generally writes the low-level "lambda calculus virtual machine language" stuff by hand outside of academic papers or bootstrapping, Scheme's "let" is implemented in terms of lambda at compile-time. (Scheme without set! is representative.)
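The "let is just lambda" desugaring, sketched in Python for concreteness (an illustration of the idea, not Scheme's actual expansion machinery):

```python
# Scheme: (let ((x 1) (y 2)) (+ x y))
# expands to: ((lambda (x y) (+ x y)) 1 2)
# The same trick, written as Python lambdas:
let_result = (lambda x, y: x + y)(1, 2)
print(let_result)  # 3

# Likewise, multi-argument functions reduce to chains of
# one-argument functions (currying):
add_curried = lambda x: lambda y: x + y
print(add_curried(1)(2) == let_result)  # True
```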

Imperative languages have a fundamental underlying model of cells in memory and sequential operations that modify them, branch based on them, etc., including an instruction pointer specifying where instructions are currently read. (C is representative.)

Intuitively (and without busting out the Greek letters): compare languages for which functions are a list of expressions, and return the value of the last, versus languages for which functions are a list of statements executed sequentially, the last of which can optionally return a value. A lot of other issues are ultimately due to how that choice directs default behavior. Even when a language has the amenities to borrow the other's tools, the defaults channel common usage.

Likewise: Relational databases are ultimately based on the relational model, even when parts of SQL fill in conceptually irrelevant details for sake of performance; Prolog is based on backtracking and Horn clauses, and well-structured Prolog can be reasoned about in terms of them, even when cuts are used to avoid wandering about needlessly in known dead ends. The ultimate result would (eventually) be the same. Both models are fundamentally different from pure functional or imperative programming, as well, and the languages built on them are as different as Erlang is from Smalltalk.

I'm self-taught and not a mathematician, so I couldn't give a really exhaustive or formal analysis of the distinction -- probably the best way to understand it (and learn quite a bit else, PS) is to work through the exercises in SICP and CTM. The second chapter of SICP includes Church encoding (http://mitpress.mit.edu/sicp/full-text/book/book-Z-H-14.html...), among many other things. SICP covers functional programming quite well, but CTM makes the fundamental differences between the different programming models and their strengths and weaknesses clearer.

You can use functional idioms in imperative languages, but they may lack features such as tail call optimization or garbage collection, which means that you're working at cross-purposes to both the semantics of the language and the cases for which its implementation has been optimized. Whether this means attempting to do so is actually infeasible (e.g. stack overflows in Python) or just slow and/or verbose depends on the language. Likewise, modeling problems in functional languages that are intrinsically better-suited to imperative languages can be messy and confusing.

(OO is a completely different direction, but as people have wildly inconsistent ideas of what it actually refers to, I'm not going there.)


Functional languages are defined using the Lambda calculus. Imperative languages are defined as a model of the Turing machine.

The implementation details, such as the in-memory format of integers, are irrelevant to the language.


That was really well-written. I liked this line:

> The Lambda Calculus is innocent of the notion of mutability


A student once asked his teacher "is Scala a functional programming language?" The teacher gave a shout and began hitting the student mercilessly with a chair, until he was forced to flee the room.

The student was enlightened, and wrote his program in Scala.


This strongly reminds me of listening to the first couple of lectures of SICP.


Didn't know clojure tries to do OO


I'm going to guess he meant clisp, clojure has a page specifically stating it doesn't do OO.

(though of course with tuples and multimethods...)


Clojure has a page specifically stating it doesn't take the traditional object-oriented approach - the Java/C++/C# one people are most familiar with: http://clojure.org/multimethods


It doesn't. Objects imply state and a set of methods to mutate that state, which is incompatible with Clojure's immutable-by-default design. Instead it provides tools that facilitate the same things as OO features: polymorphism via multimethods and inheritance via ad hoc hierarchies on names.


Aside from all the Java crap it has multimethods, but it doesn't really make sense to call them OO because there's no object that they're attached to.


OO is not about methods, it's about the data. Single-dispatch OO is just a special case of multimethod-based-OO, anyway.

(Remember the whole "sending a message to an object" thing? You do that with both variants. In the single-dispatch case, the method is a structure in the metaclass instance. In the multiple-dispatch case, some other helper object finds the right method. Same idea, though.)


Multimethods are ambient actions. You could implement them with functions and any sort of type system. Maybe also a way to have variable-length parameter lists. If that's OO then functional programming is object-oriented programming.


Yes, functional programming is very similar to OO.

OO:

    class SomeType { 
        <data> ; 
        SomeOtherType function1() { 
            this.whatever ; 
            ... 
        }
    }
FP:

    data SomeType ...
    f1 :: SomeType -> SomeOtherType
    f1 this = whatever this >> ...
Same idea, different syntax. (Type classes will allow polymorphism, if that's what you're after.)

You are thinking of the abstraction in terms of its implementation. There is no reason why a method in a class would physically be in that class; the class just acts as a namespace:

   class Foo { method }
   Foo.method();
and

   (defmethod Foo.method ...)
   (Foo.method)
are the same.


"Multimethods" means that a method is dispatched not only on the basis of the receiving object (read: first argument) but on the basis of all arguments. The term is inherently related to OOP: http://en.wikipedia.org/wiki/Multimethod


You put it much better than I did; facts without rhetoric :-)


Do you realize single-dispatching methods in class-based OOPLs are nothing but constrained generic functions? They're curried on one argument and nailed flat to the ground with a staple gun.

I don't know about Clojure, but Common Lisp's "multimethods" are OOP on steroids; you could lay out your code vertically in a parent/child class hierarchy and it would look no different from a "normal" class-based inheritance tree; all you have to do is visualize the method call in prefix mode:

  object.method (arguments) = method (object arguments)
In essence, you could model Java and similar languages with a fixed and constrained Common Lisp. You just have to take its wheels off and lay it flat on bricks.

Interested parties can go and learn CLOS for the full story. Spoiler, though: not long after, Lisp will start to hover! You just can't hold some things down. If you thought of objects in terms of a vertical class hierarchy, or a sideways call chain, or what have you, CLOS will make your visualization planes intersect and show you new dimensions you never imagined.


It's still OO, it's just not the style most people are familiar with. One can implement multimethods in Java and C++, it's just cumbersome. (In fact, Stroustrup has said he wanted C++ to be able to do multiple dispatch at the language level, but he never found an expression and implementation that satisfied him in terms of performance and the type system.)

CLOS has had multimethods for years, and that is indeed OO.



