I found this post to be a poor attempt at shoehorning the notion of functional languages into lambda calculus. Sure, functional programming is inspired by lambda calculus, but it doesn't reduce to it, and, importantly, it never has - from the very first LISPs on.
So, for example, Church's encoding of naturals is introduced, which is totally irrelevant to all practical functional languages. Despite what the author is saying, using computationally sane integers is not "nothing more than an optimization", but a fundamental part of the language (and don't get me started on floats). It's just that actual functional programming languages model a lambda calculus with atoms - basic objects, like integers, that are not functions. It's really OK - not everything needs to be a function! The fact that it's possible, in pure lambda calculus without atoms, to model integers using Church's trick is cool and has theoretical significance - but no relevance at all to functional languages.
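To make the "Church's trick" concrete, here is a minimal sketch of Church numerals in Python (just for illustration; the names `to_int`, `plus` etc. are my own, not the article's):

```python
# A Church numeral n is the function that applies f to x exactly n times.
zero = lambda f: lambda x: x
succ = lambda n: lambda f: lambda x: f(n(f)(x))

# Addition: apply f m times on top of n applications of f.
plus = lambda m, n: lambda f: lambda x: m(f)(n(f)(x))

def to_int(n):
    # Recover an ordinary integer by counting the applications of f.
    return n(lambda k: k + 1)(0)

three = succ(succ(succ(zero)))
print(to_int(three))              # 3
print(to_int(plus(three, three))) # 6
```

It works, and it's a lovely theoretical device - but no practical functional language represents its integers this way, which is exactly the point above.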
Similarly, functional programming has always included languages with mutable data. LISP was mutable from day 1, and so was the even more functionally-inclined Scheme. There's nothing in "functional programming", as it has been practiced for decades, that necessarily excludes mutable data. Now, "pure lambda calculus" is another story... but it really is another story, one that serves as a motivation for functional programming rather than the whole of it.
The author's requirement that a functional language must have tail call optimization is weird (many LISPs don't have it), and its explanation is weirder yet: a functional language must have TCO because it's based on lambda calculus, where everything is done with recursion, so you need infinite stacks (???), but real stacks aren't infinite, so "you need tail call optimization to allow the use of recursion for looping". Well, what if you don't use recursion for looping, just as you don't use Church's encoding for integers, because you are not, after all, writing in lambda calculus?
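The trade-off being argued about can be shown in a few lines of Python - a language that, by deliberate design, has no TCO (assume the default recursion limit of about 1000 frames):

```python
def sum_rec(n, acc=0):
    # Tail-recursive in form, but Python performs no TCO:
    # each call still pushes a new stack frame.
    if n == 0:
        return acc
    return sum_rec(n - 1, acc + n)

def sum_loop(n):
    # The same computation as an explicit loop: constant stack space.
    acc = 0
    while n > 0:
        acc += n
        n -= 1
    return acc

print(sum_rec(100))   # 5050
print(sum_loop(100))  # 5050
# sum_rec(10**6) would raise RecursionError without TCO;
# sum_loop(10**6) runs fine.
```

Whether you see the recursive version as the "real" functional one (hence TCO is mandatory) or the loop as a perfectly fine alternative (hence TCO is optional) is precisely the disagreement in this thread.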
Overall, I felt that the author let his intuition of what a functional language could be settle too firmly on Haskell (with the understanding that static typing is optional), and is trying to redefine functional programming in accordance with this vision.
Lisp was by all accounts an attempt to make a working lambda calculus[1]. However, the machines weren't very quick then, and speed was a real issue. Even later on, when Scheme was designed, speed was an important consideration, and it probably did affect the resulting language.
What the author was saying about TCO was that, given that all our machines today use stacks, and not wanting to have programs that break(!), any implementation of lambda calculus on these machines must have TCO to make as much of the lambda calculus as possible available in the programming language, lest you reject otherwise correct programs.
But make no mistake, Lisp[2] and functional programming languages ARE based on the lambda calculus. The article really did hit the nail on the head.
--
[1] "Lisp was originally created as a practical mathematical notation for computer programs, based on Alonzo Church's lambda calculus."
And quite relevant to this topic, our compilers and computers have (since the invention of Lisp and Scheme) gotten hugely better, to the point where we can make much closer approximations to the lambda calculus, such as lazy evaluation and immutability.
I'm not sure I agree with this first statement. For sure, Scheme was motivated by lambda calculus, but I do not believe McCarthy (a functional analyst by training) was aware of lambda calculus when he invented Lisp. Rather, he was motivated by the symbolic calculations often performed in functional analysis.
NOTE: I stand corrected, I just checked and McCarthy's Lisp paper does cite Church's thesis work. Somehow I had thought Algol was motivated by lambda calculus
No, it was an attempt to implement the Actor model in Lisp. The Actor model isn't what people usually have in mind when someone says OOP - it's more like Erlang or Termite.
Sorry to keep replying like this, but I'm just utterly dismayed by your upvotes. To put it bluntly, you are wrong (particularly paragraph 1 sentence 1 and paragraph 3, mutability).
I'm not sure how you have arrived at Lisp being your ad hoc definition of functional programming, but it frightens me.
The main issue here is that the article isn't saying that only the lambda calculus is functional programming but it is saying that the whole paradigm is based on the lambda calculus and so to determine how functional a language is, you determine how similar it is to the lambda calculus.
Wikipedia on functional programming:
"Functional programming has its roots in the lambda calculus, a formal system developed in the 1930s to investigate function definition, function application, and recursion. Many functional programming languages can be viewed as embellishments to the lambda calculus."
Feel free to reply as many times as you'd like; however, you're a bit confused. You haven't read my comment attentively - Lisp is not my ad hoc definition of functional programming, and nothing in my words indicates that. I just used Lisp a few times as an example to show that the original post's idea of a functional language is too narrow and unreasonable.
You're saying in one paragraph that the paradigm of functional programming is "based on" the lambda calculus, and to support that you're citing Wikipedia as saying it "has its roots" in the lambda calculus. These are two very different things. You can have roots in something and yet also contain other crucial ideas. Functional languages are like that: their basic syntax is inspired by the lambda calculus to a significant degree, but they don't reduce to it.
It's a mistake to assert, as you do so confidently, that how functional a language is depends on how similar it is to the lambda calculus. There is no Council on Functional Languages that ever decreed it to be the case. What "functional programming" and "a functional language" mean is relatively messy, like most things in computing, and is determined by how people have been using these words, more or less consistently, over the decades. If you look at that, you'll discover, for example, that while having functions as first-class citizens was always a definite requirement of functional programming, neither immutable state nor TCO ever were.
It seems to me that, speaking generally, many people learn about the lambda calculus and get their minds blown. Lambda calculus is a very beautiful thing, but it did not invent passing functions to functions, returning functions as results, everything is a function, etc. That started decades before with Russell & Whitehead's theory of types (numbers, first-order functions, second-order functions that act on first-order functions, third-order functions and so on) and was commonplace by Church's time. Church's contribution was a particularly simple and economical system of notation that used just one operator (the lambda) to climb the tower of types. Another ingenious notational idea was to use simple juxtaposition as function application. Functional languages later used these two ideas to great effect, but they never reduced to these two ideas and nothing else. What about conditional execution? Named variables? Lists as a fundamental structure? Static typing? Sure, you can simulate most of these in vanilla lambda calculus, just as you can simulate numbers with Church's encoding. But how is that any different from saying you can simulate loops and subroutine calls on a Turing machine, and therefore these are "just optimizations"?
To sum up: the common underlying theme for functional languages, if you look at them, has been freedom to use functions as values everywhere values are used, freedom to compose them at will, and freedom to easily define new ones. The syntax of working with functions and defining them has usually been inspired by the lambda calculus (though see Dylan for a weird exception). Being as close as possible to the lambda calculus is something that's never been a goal of functional programming as a whole, though recently some languages have gotten surprisingly close to it. Having features incompatible with or orthogonal to the usual understanding of lambda calculus has always been OK. TCO is an issue for language zealots to fight over, not a serious contender for what is or isn't functional.
It's a mistake to assert, as you do so confidently, that how functional the language is depends on how similar it is to the lambda calculus.
The article agrees with this assertion, Wikipedia agrees with this assertion and I agree with this assertion. Why is it a mistake to assert this? Granted you point to earlier type theory which I was unaware of, but my claim wasn't that lambda calculus was original.
If you look at that, you'll discover, for example, that while having functions as first-class citizen was always a definite requirement of functional programming, neither immutable state nor TCO ever were.
Here is another big gripe of mine. Given that there are practical reasons for the earlier functional languages to have been different from the lambda calculus (speed and memory), why continue using those languages as metrics later on when the limitations of speed and memory no longer apply?
Modern functional programming languages are no more similar to lambda calculus than modern imperative languages are to Turing machines.
In lambda calculus, functions are declared and manipulated. Data can be encoded as a function and vice versa. Sound like any functional languages you know (cough Lisp cough)?
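The "data encoded as a function" point can be sketched in a few lines of Python (my own illustration, not from the article): Church booleans and pairs are just functions that select between their arguments.

```python
# Church booleans: data encoded as functions that choose a branch.
true  = lambda a: lambda b: a
false = lambda a: lambda b: b

# A pair is a function that hands its two components to a selector.
pair = lambda a: lambda b: lambda sel: sel(a)(b)
fst  = lambda p: p(true)   # select the first component
snd  = lambda p: p(false)  # select the second component

p = pair(1)(2)
print(fst(p))  # 1
print(snd(p))  # 2
```

This is exactly what `cons`/`car`/`cdr` look like when rebuilt out of nothing but lambdas.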
In Turing machines, cells on the tape (or variables) are marked (or declared) and manipulated. But try shoehorning OOP into this model. It's a model. It may not capture all the details.
It's like how modeling the crash safety of a car relates to idealized theoretical physics. Sure, safety systems differ widely across cars, but ultimately we have models that put limits on how likely a person is to survive a crash. The model isn't perfect, but it captures the essence of the differences between the two models of computation.
I think your comment would be perfectly reasonable if you were trying to argue against a rigid definition of a _Functional Language_. But I think it's off base to say 'There's nothing in "functional programming", as has been practiced for decades, that necessarily excludes mutable data'.
Functional programming DOES NOT PERMIT mutability or side-effects. Lisps such as Scheme may lend themselves well to a functional style, but mutability is not an element of functional programming. I could write in a functional style in C as well, by AVOIDING mutability.
If we're just splitting hairs about "how functional" a language has to be to be considered a "Functional Programming Language" then you're welcome to label as you like. But if you are suggesting that looping without recursion (i.e. with mutable variables) is in any way Functional, then you are just wrong.
Functional programming DOES NOT PERMIT mutability or side-effects.
I think the word you don't realize is missing there is "Pure".
Pure functional programming, and pure functional style, is everything you think functional programming and functional style are. Purity is precisely the technical requirement that constrains a function to not have side-effects and to not mutate data. If, as you say, functional programming did not permit mutability, the term "pure functional programming" would have been meaningless.
But functional programming is not, and has never been, solely pure functional programming; and despite the positive connotations that the word "pure" commonly evokes, functional programming with mutable data is not some kind of inferior version of pure functional programming, a poor distant relative fresh from the provinces come to beg favors from a wealthy merchant. No. This belief commonly stems from ignorance of the history of functional languages coupled with wide-eyed admiration of an especially elegant pure functional idiom recently learned.
Some people believe that the purer the better (this belief has only gained popularity relatively recently, in fact), others think that mutable state is just fine where it helps rather than hinders. Both kinds of people happily write functional programs using functional programming languages.