I think that "Mostly functional" is actually the sweet spot.
Going to any extreme makes some things horribly difficult, and going to another does the same for other things. So, optimally, multiple paradigms coexist in a single codebase, applied where they're most useful.
Functional programming with as many immutable bits as possible is definitely a good start. I generally do that for whatever problem I'm solving: I have a model for the data and then I write (if at all possible) pure functions to transform the inputs into meaningful outputs. But then you need to know where to hand off and move over to some other paradigm that does something else right and merely drives the functional parts from the top level.
For example, a data analysis library can be written with minimal state and using only pure functions, but if -- and when -- you need some sort of a user interface so that the program can actually be used, an imperative/procedural approach is generally the most natural, because UIs are basically I/O. If you're adding a graphical user interface, you might use an object-oriented approach to build the UI tree, which is probably the world's most idiomatic, canonical use for OO anyway. But even those are generally driven by an innately imperative event loop.
Also, note that the different approaches or paradigms aren't language specific either.
In the first stage, languages are tools that shape your thinking into accepting new programming paradigms but at some point you have a number of different ways of thinking in your head, and you can just forget about the languages they came from.
But in the second stage, you can just think directly in paradigms: you can consider different ways to build different parts of your program but you might actually use only one language to implement everything. You can write functional, imperative, object-oriented, and whatever code in C. Or you can use several languages with strengths in each paradigm, depending on what trade-offs produce the best engineering in each case.
Please note that this is pretty much how you model Haskell programs as well: keep as much of your logic as possible in pure code and interface with/drive it using imperatively written stateful code. Purely functional languages (e.g. Haskell) don't remove your ability to code imperatively; rather, they augment it so that you can better reason about it while doing it. Things like first-class IO actions (that you can pass around) and explicitly marked state (i.e. it's clear what you have in context and what you don't) make for some pretty satisfying solutions you wow yourself with.
It is common to hear in the Haskell community remarks like "Haskell is the best imperative language I've used."
I agree wholeheartedly. People write some pretty impressive shared-state concurrent programs in Haskell, precisely because they know how much state is shared.
I agree. Part of the reason Clojure sees production use and Haskell does not leave its academic closet very often is that the former makes interaction with the non-functional parts of a system painless.
I don't regularly use either one, but I was under the impression that the situation is rather the reverse: Clojure is quite popular among hobbyists but doesn't get a lot of serious industry usage (perhaps excepting some stuff in the web space, which I don't really follow), while Haskell has a bunch of industry users. I see ads for Haskell jobs pretty often, anyway, while I'm not sure I've ever seen a Clojure job ad outside of HN threads.
My experience has been the reverse, which just goes to show how bad anecdotal evidence is at proving any point. Without any hard data, it's impossible to say for certain. All I know is that there is definitely enough demand for Clojure work that I didn't have to search for any leads when I left my last job; I already had four promising prospects to choose from that either emailed me directly or messaged me on LinkedIn within the previous month.
I believe Haskell has deeper industry adoption than Clojure does right now. You just don't see it a lot because Haskell people aren't as focused on evangelism.
I agree with you in general about 'mostly functional' being a sweet spot, but I think there's also lots of room for pure functions and immutability in UI programming without bending over backward too much.
For example, rather than creating a big OO hierarchy to model a UI, you can describe it declaratively as immutable data, then transform it with chains of pure and semi-pure functions whose only ultimate side effects are updating whatever bits of state absolutely must be held onto (ideally not much) and rendering the UI. I think that qualifies as 'mostly functional' even though it's not totally pure and doesn't use monads or other advanced constructs.
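A minimal sketch of what I mean, in Scala (all the types here -- Widget, Label, Column, AppState -- are hypothetical stand-ins, not from any real UI library): the UI is described as plain immutable data, a pure function maps application state to that description, and only the final render step performs I/O.

```scala
// The UI description is just immutable data.
sealed trait Widget
case class Label(text: String) extends Widget
case class Column(children: List[Widget]) extends Widget

case class AppState(count: Int)

// Pure: same state in, same widget tree out. Easy to test, no effects.
def view(state: AppState): Widget =
  Column(List(Label(s"Count: ${state.count}")))

// The single effectful step lives at the edge (here just printing,
// standing in for real rendering).
def render(w: Widget): Unit = w match {
  case Label(t)   => println(t)
  case Column(cs) => cs.foreach(render)
}
```

The whole program then becomes "state in, widget tree out, render at the end", which is 'mostly functional' even though render itself is imperative.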
I haven't yet toyed with react.js, but I believe it takes an approach similar to this.
> So, optimally, multiple paradigms coexist in the single codebase, applied where they're most useful.
If you count a pure (e.g. monadic) encoding of another paradigm in this, then you're not disagreeing with Erik. If you don't, then I think you're wrong about where the sweet spot is.
Debugging a functional program is so difficult (data flow debuggers don't really exist) that equational reasoning is necessary, because you won't be able to fix your code otherwise. But mixing list comprehensions with effects is really a bad idea, and we C# programmers have no trouble avoiding it.
There are ways to tame side effects without resorting to monads, which don't really fix the complexity problem anyway (they just make all effects explicit). See this paper for ideas on how to do that:
This is the first time I've seen this, Sean (did you post it to LtU?), and it's good to see -- based on a cursory glance just now -- one's private research validated by academia! :)
I'm taking a cognitive processing model approach [with primitives of] data ("facts") and references ("values"), and the memory model that handles and relates them takes the primary stage. This would allow existing languages to take advantage of the temporal memory model (which is content-addressable, btw) and also permit a native language with first-class support for 'POV' semantics to use it.
I'm calling this approach to building information systems "Optimistic Realism".
I want to make it real; I'm working on some interesting ways of improving performance right now. It is kind of a climb, though; I have no idea when/if this will ever reach production, but it's almost a brand new world if we can make it work.
Do you have any plans to release the code you have so far? Seems it'd be a fairly big effort to catch up with where you got to so far, especially the live editor.
Actually, the really nice thing is that the live editor is easy to build on top of the programming model! Once the pieces work a bit better, I'll try to put this on codeplex (though WPF/windows only...).
I post to Lambda the Ultimate [1] often; if you check there occasionally, you can probably get a good idea of what I'm doing. Hopefully when I get this working, it will be a big enough deal that it will be easy enough to publicize :)
Are there any practical systems that are usable today that implement "managed time"? (I haven't read the paper yet(!), but I just thought I'd ask to shorten the turnaround time.)
Btw, are you familiar with David Barbour's Reactive Demand Programming and if so, what are your thoughts on it?
Glitch is an approximation to Backus's "Applicative State Transition Systems".
See Backus, "Can Programming Be Liberated from the von Neumann Style?":
www.thocp.net/biographies/papers/backus_turingaward_lecture.pdf
Since 1980, I have applied ASTS in the development of hard real-time avionics systems software for military and commercial aircraft and spacecraft.
I have licensed this code exclusively to Aerospace companies over the years.
It has proven its value in the development of verifiable software.
For reasons that Backus states, the approach is not easy to comprehend nor apply, and requires very specialized tools (data flow debugger & proof system).
The tool is known as "Synthesis" in the Aerospace Industry.
Not really. What Backus is advocating is applicative-style programming (what we know as FP today); what Glitch is advocating is anything but! Some of the code looks similar (indeed, we are inspired by FRP and earlier reactive languages like Esterel), but Glitch keeps everything in the world of explicit control flow rather than bury everything in data flow.
I disagree with some of the sentiment expressed in the article; mostly functional programming works much better than profoundly non-functional code, and more functional programming usually delivers marginal benefits.
The fact that effects can be used to simulate other effects does not imply that programmers would typically try doing that. In fact, programming style is more often shaped by trivial inconveniences (often syntactic, or the availability of certain libraries) than most of us care to admit. A programming language that merely discourages people from writing spaghetti code should still cause less spaghetti code being written. Freak bugs will pop up from time to time, but not as often.
Shifting an existing language to a more functional style should be a net negative thing only if the resulting extra complexity and bloat gets excessive. Starting from an OOP/imperative position, it gets harder and harder to support an additional (marginal) piece of FP functionality, and after a while it's just better to start over and switch to pure FP, but current OOP languages flirting with FP are not particularly close to that point. They haven't started in earnest to introduce purity to an impure core language (which would be rather awkward, as the article points out). In this situation it is somewhat early to talk about mostly functional programming not working.
Scala might be closest to the point of excessive bloat, but I'm not really familiar with it, and I think that the FP support there is a good thing, especially in contrast to the Java code that it might displace.
(Disclaimer: I program chiefly in Haskell and I love it).
And it's important to note with Scala, a big part of its complexity comes from the practical philosophy of its creators: purity is sacrificed in order to actually make it work on the JVM the way we want it to.
I used to work with Java on the server-side, but I'm programming almost entirely in Scala now (I'm at a small shop where I was lucky enough to convince the boss to let me give it a go on a project last year) and I've got to say that it's completely changed the way I think about and solve problems. I learned Haskell in university and I'm trying to learn more, but I don't see it ever being accepted in our office.
The Scala syntax does (especially when working with async programming / Futures) suffer from problems, like the nested callback problem that Javascript also has, but there is an elegant solution in the language... you just have to know how to use it. But on the other hand, it doesn't look like a completely foreign language to Java developers and it's not too hard to get our new hires productive with it.
How do you manage nesting from callbacks and matches and such? Inlining short functions like _ + _ is fine, but how do you organize more advanced operations?
(I'm just constantly looking for ways to make my Scala code more accessible.)
I am indeed referring to for-comprehensions. Any time you have nested flatMap/maps you can replace it with a for, since that's all a for-comprehension really is...
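A tiny, self-contained illustration of that desugaring (using plain lists here rather than Futures, just because it's easy to run): the compiler rewrites the for-comprehension into the flatMap/map chain, so the two are interchangeable.

```scala
// Hypothetical values, just to show the rewrite.
val xs = List(1, 2)
val ys = List(10, 20)

// Sugared form...
val sugared = for {
  x <- xs
  y <- ys
} yield x + y

// ...and what the compiler turns it into.
val desugared = xs.flatMap(x => ys.map(y => x + y))
// Both are List(11, 21, 12, 22).
```

The same rewrite applies to Futures, Options, or anything else with flatMap/map, which is why the comprehension below composes Futures so cleanly.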
Or, since I'm using the async Postgres module which returns Futures, to make them run in parallel you need to create the futures beforehand. I often have something like this...
val fProject = Project.findById(projectId)
val fTask = Task.findById(taskId)

for {
  projectOption <- fProject
  taskOption <- fTask
  students <- projectOption match {
    case Some(project) => Student.findByProject(projectId)
    case None => Future.successful(IndexedSeq[Student]())
  }
  result <- (projectOption, taskOption) match {
    case (Some(project), Some(task)) => {
      /* do something with project, task and students */
    }
    case (None, _) => Future.successful(NotFound(s"Project $projectId not found"))
    case (_, None) => Future.successful(NotFound(s"Task $taskId not found"))
  }
} yield result
By creating the futures first, they both run in parallel and their results are 'collected' by the for-comprehension. If the futures were instead created inline inside the for, the computations would necessarily run in sequence.
I'm using the Play framework; for me, these for-comprehensions are usually found in my controllers, and the final result is an HTTP result.
Passing along failure can still be tricky but I find this much more organized than nesting callbacks.
It's an obviously silly exaggeration: what, did MacLISP, Scheme or Standard ML not exist? It's not as if Erik Meijer has the excuse of never having heard of them...
The software engineering world is in danger of repeating the mistake it made with objects two decades ago.
Back then there were legacy "structured" languages like C and Ada, and new exciting "object oriented" languages like Smalltalk and Eiffel. C++ was promoted as a "middle way" that let you "choose the best tool for the job". This made the pure OO languages look extremist. So, it was argued, if you had a problem best solved by structured programming you could do that, and if you were doing an application with objects in it then you could use those. It also meant that your old C programmers could pick up the tool and start using it immediately without having to relearn how to design a program.
"Aversion to Extremes" is a well-known cognitive bias, and these arguments play up to it, but of course it didn't work well in practice. OO features didn't dovetail neatly with the existing structured features, leading to an exponential explosion in the rules defining how the various features interacted. The mess was not helped by experienced structured programmers who felt they should use the new sexy OO features; the result was often a conventional structured design with some random virtual functions sprinkled around.
Today we have the same story happening again. On one hand we have legacy OO languages like Java and C++, and on the other hand we have pure functional languages like Scheme and Haskell. So along come "hybrid functional" languages like Scala which basically make the same promise as C++: if your problem has lots of objects then you can carry on doing the same OO designs you know and love, but if you think that these magic first-class functions would be useful in some complicated algorithms then you can use those as well. And it's going to fail for the same reasons that C++ failed: the OO and functional features don't interact well, so we are going to have lots of messy rules about them that cause subtle bugs, along with attempts by OO programmers to use chains of map and filter functions that work inefficiently because the compiler can't optimise them. And in ten years' time there will be an "Industrial Strength Scala" book consisting of a long list of features that should be avoided if you want reliable software.
"And it's going to fail for the same reasons that C++ failed"
If only my successes were as successful as that failure. Back then, if you had a code base in C, migrating to C++ was easier than migrating to Smalltalk or Eiffel.
Similarly, today, migrating a C code base to C++ and using its functional 'extensions', or starting to use those features in an existing C++ code base, is easier than migrating it to Haskell or Scheme.
C++ is far from perfect, but it beat other contenders because of that feature.
And yes, if, as I expect, C++ remains popular, we will have books describing the pitfalls of functional-style programming (Stroustrup: "There are only two kinds of languages: the ones people complain about and the ones nobody uses"). For example, we will see blog posts lamenting the heavy nesting of map/apply/select chains because the resulting code, in some cases, will run out of instruction cache space, degrading performance.
Of course C++ didn't fail in the sense that it would lack popularity; I think the parent meant that C++ is a complex, horrible mess and that it failed in the "beauty contest" sense.
It also failed in the sense that it didn't eliminate all competition.
Yes, that is what I meant. Thanks for the clarification, although I think the term "beauty contest" trivialises the issue. It's not about beauty, it's about buggy software.
Right. But until we have a comparable body of production software written in pure functional code -- and by comparable, I mean closer to the volume of code written in today's mainstream languages and by average developers, not a few applications written by elite developers -- we can't say for sure whether the pure functional approach does in fact prevent buggy software, or whether it simply leads to different kinds of bugs.
A lot of advocacy for functional programming and languages like Haskell makes claims about being safer because certain types of programmer error are prevented. However, relatively little commentary also highlights downsides like the need for accumulating parameters and strictness annotations just to achieve acceptable performance, or the maintenance hazards of having both monadic and non-monadic implementations of algorithms. Until someone finds a way to prove that a large-scale lazy functional program can't exceed the available system resources with an explosion of thunks, any claim that this approach is inherently safe seems rather premature.
So, library code and critical infrastructure would fall outside of the "vast majority of cases". :)
And again, the question there is "Did Heartbleed keep people from getting value out of OpenSSL?". The answer is no, even if the value provided wasn't as advertised. It was more useful to have something--anything!--than nothing.
'So, library code and critical infrastructure would fall outside of the "vast majority of cases".'
Arguably, though the limited sandboxes in wide deployment are sufficiently full of holes that most code provides some access to something that might be considered critical infrastructure.
'And again, the question there is "Did Heartbleed keep people from getting value out of OpenSSL?". The answer is no, even if the value provided wasn't as advertised. It was more useful to have something--anything!--than nothing.'
Possibly. Belief that you're secure when you aren't can quite definitely be much worse than no belief that you're secure, but I'm significantly less sure that Heartbleed moved many people from "sufficiently secure" to "insufficiently secure" in many individual cases (the possibility exists, I just lack the data to make any determination).
Also, for many use cases, there exist (and existed then, of course) alternatives that are probably more secure. If "nothing" meant using those instead it was quite a bit better to have "nothing".
> And it's going to fail for the same reasons that C++ failed
C++ "failed"?
> the OO and functional features don't interact well
How don't they? Before C++ had lambdas, programmers would define a class with overloaded operator() and use that instead, every time. Now the language provides a way to easily generate the class with its members and constructor and operator() automatically - which is basically what functional languages turn lambdas into. You could make the same argument that "procedural and OO features don't interact well", but C programmers were (and still are) using structures and function pointers to accomplish much the same effects.
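That same equivalence happens to be spelled out explicitly in Scala's standard library, for what it's worth (a sketch; Function1 is Scala's standard one-argument function trait): a lambda is just sugar for an object with an apply method, much like C++'s generated class with operator().

```scala
// Lambda syntax...
val inc1: Int => Int = x => x + 1

// ...and the object-with-a-call-method it is sugar for.
val inc2: Int => Int = new Function1[Int, Int] {
  def apply(x: Int): Int = x + 1
}
// Both behave identically: inc1(41) and inc2(41) are 42.
```

So "a function value is an object" is not a C++ quirk; it's the standard way OO runtimes model first-class functions.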
If you have a mutable object anywhere in your codebase, the compiler cannot reuse calculated values, ignore unneeded code, or map a set of independent output sequences[1] in your source code into highly interleaved, single-threaded asynchronous output, like Haskell's green threads do.
[1] I have yet to see an imperative language (and "mostly functional" is imperative) that has the concept of an "output sequence", instead of an "execution sequence", but I don't know enough to claim it's impossible.
That's interesting. I wonder if anyone has done a performance study to measure and compare the speed-up offered by the functional approach for some standard workload.
Well, it's hard to define "some standard workload", and obviously you can always write something in C that performs at least as well as any program in Haskell. What's important is comparing how hard it is to write and maintain both programs.
For instance, in Haskell a value is a value. If I have a value of type Integer then it definitely exists; if I want to say that it might not exist then I use the type "Maybe Integer", which expresses that idea precisely.
Same goes for a value of type Employee; if I might or might not have an Employee (for instance, if the lookup function doesn't find someone with that employee number) then I have to use Maybe Employee.
Scala has the same concept with (IIRC) the Option type. But Scala also inherits null references from Java. In Java a reference to an Employee might be an optional value (so null is allowed) or it might be a required value (so null is not allowed). Scala has to play well with Java so anything has to be allowed to be a null reference. Except that Integers in Java (and hence Scala) aren't references, so I can't have a null reference to an Integer.
So now my "Option Employee" might wind up being a null reference to the Option, or it might be an Option that is empty, or it might be an Option that contains a null reference to the Employee, or it might actually have an Employee in it.
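The four states described above can be written down directly (a sketch; Employee here is a hypothetical stand-in class). Note that all four type-check in Scala, which is exactly the problem; the one mitigation is that the `Option(...)` factory collapses a null argument to `None`.

```scala
case class Employee(name: String)

val present:     Option[Employee] = Some(Employee("Ada")) // actually has an Employee
val empty:       Option[Employee] = None                  // an Option that is empty
val nullInside:  Option[Employee] = Some(null)            // an Option containing a null reference
val nullOption:  Option[Employee] = null                  // a null reference to the Option itself

// The Option(...) factory at least guards against the "null inside" case:
val guarded = Option(null) // evaluates to None
```

In Haskell none of the last two states can even be expressed, which is the point being made above.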
The same is true of C++. Values and references in C++ can't be null. Pointers can be null, but that's a lower-level feature that doesn't have an equivalent in idiomatic Haskell.
"Ptr a" is an equivalent, in many senses. It's quite true that use is far less in idiomatic Haskell, especially as you're restricted to IO when operating on them.
Ptr is exactly the same thing as a C pointer. Critically, if you have a Ptr to some value, that Ptr could become invalid if the value gets moved by the garbage collector, just like a C pointer would. That's why you'd never use Ptr except in FFI code, to access memory that isn't managed by Haskell's GC. (If you need to pass a pointer to a Haskell value to code written in another language, you'd use a StablePtr, which pins the value in the Haskell heap.)
In idiomatic Haskell, if you need something like a pointer you have several choices. If you're writing parallel code your best bet is STM. If you're writing code in IO, you can use an IORef. If you don't want that restriction, you can use an STRef.
(I'm aware that you probably know all this, but other people reading the thread might not.)
It's worth noting that `Option(null)` is `None`. And while it's possible for any reference to be assigned `null`, it rarely happens in Scala code, as a) it's well known to be bad form, and b) IDEs can be configured to yell at you about it. Practically speaking (from the middle), it's not a problem.
That's an important point, but since you can't test for undefined-ness it's not quite the same thing as carrying around something that's actually a Maybe Int but just hopes you'll check before implicitly using fromJust everywhere.
And just what makes you think making a purely functional language 'lambdacious' will not a few years later result in a book 'Industrial strength lambdacious'?
You're falling for the trap of thinking that pursuing a theoretically pure discipline will result in a language with fewer flaws; in reality, even theoretically pure concepts have issues and can be superior or inferior to other theoretical concepts/constructs [which may or may not be invented later]. Pursuing one theoretical framework may make some issues either disappear or become invisible (like garbage collection makes the issue of memory management appear to disappear -- but it still has to be done at some level, and this causes new issues, as evidenced by many talks given and books written about how to deal with $LANGUAGE's garbage collector), but that doesn't mean that you won't have issues remaining or even create new ones.
You call "Aversion to Extremes" a cognitive bias that doesn't work well in practice, but you should realize that the opposite is true; the more extremely a language pursues a theoretical concept, the less used it typically ends up being in practice. For every success story you can give me about Agda (pursues dependent typing), Haskell (pursues functional purity) or Smalltalk (pursues object-orientedness), I can give you a thousand success stories of people using C++, Java or C#.
I can't imagine that you will be able to come up with any measure of "how well something works in practice" that is remotely sensible and makes these "purity-oriented" languages appear to "work better in practice". The reality of programming is just that there are many different requirements one might have of a language and its implementation, and properties that are very clear advantages in a theoretical framework seldom translate to practical benefits in a nontrivial way.
A nice example (IMO) of a language that pursues functional programming without attempting to be needlessly pure about it is Erlang; each "process" itself is written in a functional language that promotes separation of side effects and communication, but from a "further up" perspective, each process is like an object, holding onto state. That way it gains the [for Erlang] important advantages of having functional traits, while not sacrificing flexibility through needlessly strict adherence to the functional paradigm.
The problem with Erlang is that it solves the wrong problem, so to speak. Reasoning about state locally (inside a procedure/function) isn't all that hard -- which is why intra-procedure/function immutability doesn't actually get you very far. The trick in, e.g. Haskell, is that you can enforce inter-function immutability. In the end all actor-based systems end up being a huge mess of distributed/shared mutable state -- which is what we're trying to get away from. (I'm well aware that there are formalisms that can help you deal with some of this complexity, but they are a) not part of the language, and b) not practiced very widely in my experience.)
Haskell doesn't address problems of distributed computing at the language level. For distributed computing you need message passing, and you need to handle failures. If you do distributed computing in Haskell, you also need to build and use actor-based abstractions. It is not possible to hide distributed computation behind an immutable function call abstraction (RPC systems tried to do it and failed).
Indeed not, but that's not quite the point I was trying to make. My point was more that preemptively programming as if every single little piece of state is potentially distributed state is actually detrimental. Distributed mutable state is hard and there's no way around that other than changing the model to e.g. a reactive one -- local mutable state shouldn't be hard.
Quote: "You call 'Aversion to Extremes' a cognitive bias [... but] the more extreme a language pursues a theoretical concepts, the less used it typically ends up being in practice."
Yes, people use languages with solid theoretical foundations less because they perceive them to be extreme. That was my point.
Quote: "I can't imagine [... any measure that] makes these 'purity-oriented' languages appear to 'work better in practice'."
Quote: "each process is like an object, holding onto state".
This is the actor model. Erlang is an uncompromisingly pure implementation of the actor model, which is why it is so effective. Again, I think you are making my case for me.
Your use of the term "sacrificing flexibility" sets up a false dichotomy between purity and flexibility, as though there were programs that are difficult to write in Haskell.
They use them less because they are less practical and, due to their purity, are unable to fulfill people's (real, rather than theoretical) requirements.
Measuring LOC required to implement a task is meaningless. If you implement a task in python in fewer LOCs than what I need in C, that doesn't necessarily mean anything, because my code might fit on an 8-bit microcontroller, run faster, ... or meet any other number of imaginary or real additional requirements I might think of (and people always have lots of those!) "Have 10% fewer lines of code" is pretty much never a given requirement, though.
Sure, Erlang is an "uncompromisingly pure implementation of the actor model", but then Python is an uncompromisingly pure implementation of the Python model, and C is an uncompromisingly pure implementation of the C model, so that's hardly an interesting argument. Erlang is pretty much what popularized the actor model as we consider it nowadays in mainstream programming, so the claim is tautological.
The point (which you have not actually argued against) is that Erlang is also a functional language, but not a pure one; it allows side effects, it allows sending messages, it allows storing data in your (destructively updatable) process dictionary, et cetera.
That means Erlang benefits from the features it inherits from functional languages, even though it doesn't go "full functional" in the sense some other languages do.
If you don't think there are programs that are difficult to write in Haskell, Agda, Miranda, ML, ..., I have to doubt that you have written a substantial number of them, and/or that you have applied them to any actual real-world task.
> Erlang is an uncompromisingly pure implementation of the actor model
It's not. To pick just the first issue I remember, PIDs are forgeable. (I spent a day once trying to survey what you'd have to do to capability-tame Erlang, and it looked like a lot of work. The Erlang developers did not wear a hair-shirt.)
Not that I necessarily agree/disagree with the general thrust of your comment, but...
Scheme is not a pure functional language, it's pretty far from it (see all the "something!" functions). Heck, it could even be argued that Haskell isn't given the existence of unsafePerformIO, though most people think of it as such. (Since unsafeXXX functions are generally vehemently discouraged in the Haskell community it's not really a problem that crops up in practice.)
I think the OP wittily conflates two senses of pure. Scheme is purely functional in the sense that it's _only_ functional, as opposed to a hybrid like Scala, as the OP points out.
But it's not purely functional in the sense of Haskell, that famous flagship of effect-less programming. Here 'pure' is contrasted against _effectful_ functions.
In Scheme, everything is an expression, and because of macros and mutability not all expressions are functions in any sense beyond that by which we commonly use 'function' as a synonym for 'programming procedure'. What Scheme doesn't have is a semantics that distinguishes between statements and expressions.
Excellent summary. Yes, I did indeed conflate these two meanings of "pure" because I didn't want to get my post side-tracked into a discussion of this point.
Unfortunately if you want a language with industrial strength tools and libraries that is pure in the effect-less sense, Haskell is the only choice. So when I try to talk about the issue of effect management it sounds like I'm merely plugging my favourite language instead of making an argument about the importance of a fundamental property.
How about Clojure, which is a Lisp-like language with objects? Explicitly, you have protocols and multimethods; implicitly, almost everything under the hood is done with Java interfaces (and you can make new first-class datatypes by implementing the interfaces, though this is mildly discouraged). I cannot recall anyone complaining about the OO clashing with the functional patterns.
There's also O'Haskell, and in regular Haskell you can get something like polymorphism (implemented under the hood with actual polymorphism) with forall-qualified datatypes.
It's not OO and functions that clash... it's functions and state. There's (literally) no law of computer science that says objects and functions must clash.
I'm an experienced Clojure user with work done on the job and in open source. If you're a Clojure user, it's very likely you've used a library I've worked on or made.
Don't bother. Go straight to Haskell and just Haskell.
No excuses, no compromises, no mental backflips to justify not learning something new. Learn Haskell properly and then see for yourself why "hybrids" are a waste of time.
Hybridized approaches are like asking for a hole in your bathroom floor when you're being offered indoor plumbing with a porcelain toilet instead of an outhouse.
I find your rant slightly insulting, off-topic, and lacking in substance. "Learn Haskell because I say so." Why? No.
But let me do something you didn't do, which is to actually explain my position. Hybrid languages afford a flexibility and power not available to language puritans. It's very nice to have things like first-class DSLs that don't rely on slow monad stacks, heterogeneous lists that don't rely on existential quantification, stateful programming that isn't a type system hack (do I really need to understand existential uninstantiated types to twiddle a bit in a vector?), first-class side effects without monad stack weirdness (unsafe escapes don't count), Turing-complete macros at load time, inheritance and class qualification that isn't crippling and/or dependent on weird compiler extensions... In the course of learning Haskell I find myself banging my head against a type-system wall to do things that would be trivial in Clojure or Scala.
Now I like Haskell, but Clojure and even Scala offer very flexible and powerful defaults together with programming escape hatches anywhere you want them. This is very powerful. It's not clear how to get a similar combination in a strongly typed strict language. Monads and weird compiler extensions don't cut it.
Ah, the Haskell assumption that people who dislike Haskell's way of doing things just don't get it.
Why are you talking to me about Applicatives? Applicatives are easy. I'm not complaining about what the type system can do. I'm complaining about what it can't do. And it can't do a lot. For instance, it can't "safely" twiddle a bit in a vector without resorting to a compiler trick involving uninstantiable existentially typed monads. It can't produce DSLs that live outside the type straitjacket, so you get really complicated stuff, like the kudzu-like system of the Lens library, full of existential types and higher-rank types and restrictions, that still is not actually typesafe. And so on. I'm repeating myself already.
Monads and weird compiler extensions are not the same as a language that respects the programmer who wants to do something unusual. This is what I'm saying. Don't tell me about applicatives as a response. It's not a response, and it's insulting.
You don't appear to be here to share ideas or learn, so let's get concrete rather than posture.
When you say "uninstantiable existentially typed monads" you're showing off your knowledge and being verbose when you actually mean "ST". There's no actual problem here; you're using big words to seem impressive, and by being obscure you're condescending to them and to me. ST works the way it does so you can't accidentally leak mutable references. There's nothing particularly limiting about it beyond that. ST is like generic transients that work for any data type instead of just the blessed ones.
You don't need to care that ST is existentially typed in order to use it at all. That's like complaining about how difficult it is to weld steel so you live in a mud hut instead. Division of labor applies here because abstractions and types work in Haskell.
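To make the ST point concrete, here's a minimal sketch of local mutation behind a pure interface (the function name `sumTo` is mine, not anything from the thread). The rank-2 type of runST is what prevents the mutable reference from escaping, but you never have to think about that to use it:

```haskell
import Control.Monad.ST
import Data.STRef

-- Locally mutable accumulator; runST guarantees the STRef can't leak,
-- so from the outside the function is observably pure.
sumTo :: Int -> Int
sumTo n = runST $ do
  acc <- newSTRef 0
  mapM_ (\i -> modifySTRef' acc (+ i)) [1 .. n]
  readSTRef acc
```

Callers see an ordinary `Int -> Int`; the mutation is an implementation detail.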
There's nothing wrong with twiddling bits in Haskell. I do it all the time. You're complaining that the type system won't let you lie to your users and say a side effect is
a -> ()
-- instead of
a -> IO ()
I happen to think knowing which functions are pure and which are actions is a plus.
"Can't produce DSLs that live outside the type straitjacket"? What? You can build a DSL on hash-maps and symbols in Haskell the same way Clojure users do. Haskell users don't do that because it sucks. ADTs and free monads are nicer.
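As an illustration of the ADT approach to a DSL (the `Expr` type and `eval` are my own toy example, not from the thread), the data type pins down exactly which programs are expressible, and the compiler checks that every case is handled:

```haskell
-- A tiny arithmetic DSL as an ordinary ADT.
data Expr
  = Lit Int
  | Add Expr Expr
  | Mul Expr Expr

-- An interpreter is just a total function over the ADT; forgetting a
-- constructor is a compile-time warning, not a runtime surprise.
eval :: Expr -> Int
eval (Lit n)   = n
eval (Add a b) = eval a + eval b
eval (Mul a b) = eval a * eval b
```

Adding a second interpreter (a pretty-printer, an optimizer) is just another function over the same type.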
Name-dropping lens as if it's a negative to Haskell is nuts. You're not obligated to use lens if you're still learning Haskell, but finding it difficult to use is a sign you miscalculated your knowledge.
The foundational building block of writing one's first lens is scrubbing out the (fmap . fmap) pattern, which is a use-case even Clojurians can relate to. There's no "there" there to complain about.
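For readers following along, `(fmap . fmap)` is just mapping two functor layers deep (the `bumpAll` wrapper is mine, added for illustration):

```haskell
-- (fmap . fmap) composes two layers of mapping: the outer fmap maps
-- over the list, the inner one maps over each sublist.
bumpAll :: [[Int]] -> [[Int]]
bumpAll = (fmap . fmap) (+ 1)
```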
Your library 'clearley' would be easier to understand if it had types.
Have you considered a book like Joy of Clojure? http://www.manning.com/fogus2/ I found the first edition of Joy of Clojure improved and clarified my Clojure. Also, back off the defprotocol/defrecord in Clojure until your design is cleaner and more firmed up.
You're not here to learn or share anything, so I'm backing out of the conversation. Provided some clarity on the obtusity before parting.
tl;dr nonsense, intentionally obscure appeals to things that are "too complicated" which are comparable to refusing to drive a car you couldn't build yourself.
You can learn how to build the car later after you understand why it was engineered that way.
I spent a couple of years using Haskell, but a few months after I started using Clojure I've switched over to using it pretty much exclusively.
I certainly didn't find Clojure to be a waste of time; to my mind it's a far more practically-minded language. Its philosophy of decoupling components is also interesting. For example, in Haskell, the static type system is coupled to the language; in Clojure, the static type system is an optional library.
What is your take on the "you have to write clever, complicated code to make haskell work" (1). I've read that a lot of the code use for the language shootout is far from idiomatic haskell, but just plain clever, in order to get decent performance...
>"you have to write clever, complicated code to make haskell work"
Haha, no. I write pretty dumb Haskell myself. The author might've tripped into a library beyond their faculties. Also possible, they might've tried to make something "practical" before they really knew what they were doing. I made this mistake in the past myself.
I even tweeted a sample of my code making fun of how "stupid-simple" my Haskell is :)
I might refine the code later, but for now I'd rather KISS until I'm prepared to firm up the design.
You can get down a rabbit-hole with Haskell if you want, but the same is equally true of Clojure/Scala/C++/Ruby/Python/Perl.
Even then, the rabbit-hole at least has typed hand-rails.
Negatives?
Uhhh, missing libraries (which I am working to help remedy)
You develop a "nose" for libraries for which you aren't the audience. It's not a big or unavoidable negative, but there are definitely libraries on Hackage made to prove a point rather than for use in production. These libraries are easy to identify; some of them are prefixed acme-*.
The primary negative that matters is simply that Haskell is deeply unfamiliar to most working programmers, so while it's 100% worth it, the road to Haskell isn't as well-trodden or smooth as it would be going between Python <---> Ruby.
I'm working to remedy this as well. I've been teaching Haskell for the last ~6 months.
The source for Bloodhound really is clean and simple. I might actually just use the Types.hs file as a my goto reference for the ElasticSearch query format next time I need to write some of those!
The general approach in functional programming is to make something right (provably correct) first, and then transform it into something performant. There are books (e.g. Richard Bird's Pearls of Functional Algorithm Design) that elaborate on this technique. The point is that the series of transformations preserves the correctness of the original implementation.
In reality this isn't necessary very often. It is important to understand persistent data structures, how they differ from imperative ones, and which to use in different situations. However with a good understanding of the fundamentals (just as in imperative code) you'll be writing idiomatic performant functional code fairly easily.
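A minimal sketch of what "persistent" means in practice (the bindings below are mine; Data.Map is the standard container here): an update returns a new structure and leaves the old one intact, with the two sharing most of their internals:

```haskell
import qualified Data.Map as Map

-- The original map, before any "update".
base :: Map.Map Int String
base = Map.fromList [(1, "a")]

-- insert returns a new map; base is untouched, and the two share
-- structure internally, so the copy is cheap.
extended :: Map.Map Int String
extended = Map.insert 2 "b" base
```

This is the key difference from an imperative container, where the insert would have destroyed the old version.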
Is your argument that deep experience in Clojure is sufficient for determining that FP/OO hybridization isn't a good idea or can't be done well? I didn't think that was in the sphere of Clojure's goals.
I've used a lot of languages, including Clojure and Scala. I'm saying any time spent learning Clojure when Haskell exists is a waste of time and a half-step.
Everybody who hasn't should be learning Haskell, regardless of background.
Teaching somebody Haskell is faster than explaining why the 1,001 dumb things mainstream languages do are dumb. I don't want to waste my time explaining why null values are dropdead stupid when I can just show them "Maybe".
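The Maybe point in a nutshell (example and names mine): absence is a value the type system makes you handle, instead of a null that blows up somewhere far away:

```haskell
import Data.Maybe (fromMaybe)
import qualified Data.Map as Map

-- Map.lookup returns Maybe Int, so the caller is forced to decide
-- what a missing key means; there is no null to forget about.
ageOrZero :: String -> Map.Map String Int -> Int
ageOrZero name ages = fromMaybe 0 (Map.lookup name ages)
```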
I find it funny to hear such grandiose claims about a language that has yet to create a major killer project. That's why I can never bring myself to write Haskell. Show me the Storm, OTP, Datomic, Netflix, Whatsapp, etc. written in Haskell and perhaps I'll have a reason to change my mind. But until then all I see is C#, Scala, Erlang and Clojure shipping awesome products and the Haskell guys sitting in the corner saying "You're not doing it right!!!"
Haskell's community is a bit more thoughtful and less prone to self-promotion. That, combined with the fact that you're not a Haskeller, means you're not familiar with what they've built.
Another issue is that Haskell is somewhat weighted towards finance and they're a bunch that tends to be somewhat proprietary about their IP. Some exceptions (Ermine) exist.
Some notable projects that come to mind include git-annex and Parsec. Parsec has been the preeminent parser-combinator library since forever.
Also: pugs, pandoc, gitit, Darcs, xmonad, Idris (same person that did whitespace, haha)
Recently? Cryptol. That's pretty major.
I'm building stuff in Haskell, right now. Specifically, the easiest to use Elasticsearch client in the world.
One was needed. Badly. The ES API's data structures were never specced out properly, just documented on an ad-hoc basis. The Haskell library I'm working on will, if nothing else, mean that there's a reference for how the JSON is structured.
Ah, I thought I knew that username from somewhere. The negative-nancy is Timothy Baldridge, an employee of Rich Hickey's company Cognitect. Cognitect is "the" Clojure company and represents the inner-clique.
Personal attack? What personal attack? Are you reading the same comment I am? He identified someone. If identification amounts to a personal attack, then that would seem to indicate severe problems with the image of the person being identified!
"The negative-nancy is Timothy Baldridge, an employee of Rich Hickey's company Cognitect. Cognitect is "the" Clojure company and represents the inner-clique."
Apart from the juvenile "negative-nancy", there's also the clear implication of ascribing untoward motives based on the person's identity.
> Learn Haskell properly and then see for yourself why "hybrids" are a waste of time.
/rant
Getting tired of puritans. You people are selling functional programming on the basis of "ideology", not merits.
Can the following be done in Haskell ?
* A Real Operating System.
* GPU programming.
* Embedded programming.
* If you can do all the above, can you replace Verilog ?
* Financial Programming.
  OCaml is one of the "hybrids". Genetically impure, since it has "refs". But they have Jane Street.
* Games worth playing.
  Assertion: if all the Haskell programmers are put in a gulag, they can't come up with a half-decent game. Nintendo Game Boy games were written in assembly with gotos.
Why don't you guys take a moment and pat yourselves on the back? Hypocrisy-2.0 is probably on Hackage.
Let's say you do all the above, GUI apps, distributed computing, <cool-buzz-word>... without complaints, and without ending up as a half-decent C++ or a 1/10th Lisp.
Haskell syntax is garbage.
(Common Lisp has a goto, gee what were they thinking ?)
Mathematicians pride themselves in their rich history of syntax.
Haskellers actually type "Arrow".
"I do consider assignment statements and pointer variables to be among computer science's most valuable treasures."
There are a number of toy operating systems written in Haskell. There are not very many "real operating systems" being written in any language, because it is a tremendous undertaking for uncertain value in a space that already has significant players with platform-effect lock in. I don't see why building a "real OS" in Haskell would be much harder than building a "real OS" in other languages, though.
I don't know anything about the state of GPU programming in Haskell, so I'm not going to speak to that.
Regarding embedded, GHC can target some quasi-embedded platforms these days, but it's also possible to write embedded programs in (pardon the overloading) an embedded DSL that can compile to C. Check out the Atom library.
I've heard about some people doing hardware synthesis involving (again) embedded DSLs in Haskell - I don't really know the state of it, but IIRC Conal Elliot gave a talk on some related stuff at one of the Bay Area Haskell meetups.
Jane Street is doing a lot of O'Caml stuff, and that's awesome, but there's definitely a lot of Haskell in finance these days as well (I've seen several job postings, and heard some chatter generally). I don't know which is better represented, or how it compares to other stuff - most big finance places don't talk about what they're doing inside.
Oh, and Haskellers actually type "Arrow" when they're working with a particular generalization of functions. When they're actually working with functions, they type ->.
hypocrisy: the practice of claiming to have moral standards or beliefs to which one's own behavior does not conform; pretense.
"GHC can target some quasi-embedded platforms these days, but it's also possible to write embedded programs in (pardon the overloading) an embedded DSL that can compile to C."
Haskell can't do any useful work without relying on C. Instead of having pride in what C can do, you have invented the following adjectives "hybrid", "mathematical language", "insecure", "impure", "pure".
Either,
1) Stop with the adjectives and get Real.
2) Prove that Haskell can replace C, the "impurity".
The same holds true for mathematical heritage. I would ask mathematicians to use Mathematica, not Haskell.
(I don't consider OCaml to be a part of this debate because OCaml is hybrid and Real, like C++)
You asked some questions, I answered them in an attempt to be helpful. Any conclusions you draw about my personal beliefs are spurious and roughly as well founded as your conclusions generally. For what it's worth, I write a bunch of C and I like C very much. I also like Haskell and have to say you really don't know what you're talking about here. I'm not interested in dealing further with these kinds of ramblings, at this point.
This is nothing but trolling. I wasn't disputing your definition; but you weren't including it as merely an informative point on a random English word. By the pragmatics of English conversation, it was clearly meant as an accusation in this context. Your retreat to defense of the definition, as opposed to any notion of the applicability of the definition or of anything else you've said, is disingenuous. In any event, at this point I am done responding in this thread, whatever else is posted.
For GPU programming - Haskell is a quite good option for running [parts of] your heavy computations on the GPU instead of the CPU, and the language features directly help with that.
And Verilog is quite well entrenched, but if/when it's going to be replaced by something else, currently it seems that it will most likely be a Haskell derivative, something similar to Bluespec perhaps.
But can you clarify your point? The cause of the rant seems to be the "hybrids are a waste of time" line; however, most of the examples you provide are things that are currently done in, say, C/C++, not in the hybrid languages.
Are you claiming that the hybrid languages are (or will be) much more widely used for developing 'A Real Operating System', embedded programming or mass-market games?
Clojure's big idea however is not FP or OOP+FP+Lisp, it's data. All idiomatic Clojure programs will attempt to represent their app state via Clojure data structures (vectors, sets, and hashmaps). Functions are then the simplest way to do transformations from one dataset to another. Bonus points to those who also make those functions composable via more data. Then you start to build a system with real power.
But on a different note, the reason why I program in Clojure rather than Haskell is that Clojure is so pragmatic. The JVM gives me access to almost every library I need, the JIT and GC are the fastest on the planet (for this sort of language). But most of all, Clojure trusts me. Clojure doesn't attempt to slap my hand when I make an impure function. It doesn't say "hey we can't figure out what type this function returns". Because at the end of the day, I don't care about that stuff. I only really care about writing good software, that fulfills my goals. If that means certain parts of my code are impure...I don't give a crap.
At least that's the way it's been for me for 4 years working with the language.
and in regular Haskell you can get something like polymorphism (implemented under the hood with actual polymorphism) with forall-qualified datatypes.
Huh? Haskell's polymorphism capabilities are more expressive and more expansive than most other languages'. Higher-kinded polymorphism, a mainstay in Haskell, is pretty rare to find almost anywhere else.
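As a small illustration of higher-kinded polymorphism (the function is mine), you can abstract over the container itself, not just the element type:

```haskell
-- f is a type constructor (Maybe, [], IO, ...), abstracted over via
-- the Functor constraint: polymorphism in the container itself.
applyTwice :: Functor f => (a -> a) -> f a -> f a
applyTwice g = fmap (g . g)
```

The same definition works for lists, Maybe, IO, and any other Functor, which is the kind of abstraction most mainstream type systems can't express.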
I absolutely get the point about Scala, and hadn't looked at it from that angle before; can totally see that in a decade.
The difficulty with C++ does ultimately boil down to what you describe. But IME Objective-C is an easier language in which to use OO features while retaining the flexibility of C where required, so it doesn't always have to be the case that a multi-paradigm language fails. While ObjC is basically C + objects + a dynamic runtime, C++ has lots more stuff in there; I guess that's the kicker.
I don't know, but something tells me that it may take quite a while for Scala to fail in any noticeable way, at least according to google trends it's still the most popular functional language - http://www.google.com/trends/explore#q=%2Fm%2F03j_q%2C%20%2F...
Perhaps we can call this the Paul Phillips bashing-Scala-tour effect o_O
Hope Scala 3 addresses current issues with the language, which, if it happens, seems to be at least 2 years away (i.e. 2.12 next year and possibly Scala 3/Dotty in 2016).
Since C++ is also still used plenty throughout the world, I think he might've meant that it failed to live up to the expectations placed on it as a good balance of structural and functional.
I always saw Scheme as pursuing its own form of purity, in a sense. That sense has little (perhaps nothing) to do with the notion of purity under discussion here, though.
The infix application function (ma>>=\a->f(a)), commonly called bind, executes the computation ma to expose its effects, calling the resulting value a, and passes that to function f.
This is the kind of imprecise language that really made life extraordinarily difficult for me when I first learned about monads. I think this is an important point:
>>= does not execute ma!
If it did execute ma, that would violate the purity of the language.
Instead, >>= takes the computation ma and combines it with the function f to build a new, larger computation that is composed of smaller parts. The resulting computation (and ma) might never be executed (depending on the rest of the program).
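A small Maybe example (mine, not from the article) makes the compose-don't-execute point without involving IO at all: >>= just wires steps together, and the Monad instance decides what "then" means:

```haskell
-- half fails on odd input; the failure is an ordinary value (Nothing).
half :: Int -> Maybe Int
half n = if even n then Just (n `div` 2) else Nothing

-- quarter is built by composing two halvings with >>=; nothing is
-- "executed" here beyond ordinary evaluation of a pure value, and the
-- Maybe instance threads the possible failure through.
quarter :: Int -> Maybe Int
quarter n = half n >>= half
```

The same >>= with IO builds a description of effects in exactly the same way; the runtime, not >>=, is what eventually performs them.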
Before learning about monads in Haskell, you probably should learn a bit of Haskell first. Most tutorials assume that. So the fact that function application is lazy is assumed as prior knowledge, since any approach to learning Haskell would cover that before monads.
This has nothing to do with laziness, because laziness does not affect the semantics of a program that has bounded recursion. [0]
The problem is that the quoted part of the article is written as if the >>= operator had side-effects (whether lazy or not), and that's just plain false.
Now I agree that ordinarily, a student of Haskell has learned very early on that There Are No Side-Effects in Haskell, and should therefore not be confused. However, introductions to monads typically start out by stating that monads are how you can get side-effects in Haskell, and so they explicitly "deactivate" the No Side-Effects-assumption that students have. That's what causes the confusion.
(In fact, the moment I finally understood monads was precisely when I realized that a useful way of thinking about it is that Haskell code with monads does not have side-effects after all. This is totally obvious in hindsight, but it seems that the best way to get this point across in teaching material has yet to be found.)
[0] Obviously, this is only true in a side-effect-free language, but we're talking about Haskell here...
The semantics of >>= do not depend on the laziness of the language. >>= behaves exactly the same in Scala (although it's called flatMap there) as it does in Haskell. >>= does not execute effects, it only composes them. So to correct the article:
The infix application function (ma>>=\a->f(a)), commonly called bind, composes two computations. When the resulting computation is executed, ma is executed first, f is called with its result and finally the computation returned by f is executed.
I find it interesting that he's now advocating monads considering his earlier stance on static type systems[0].
Of course it may just be that he's changed his mind -- it happens.
(Yes, I consider monads as fundamentally requiring a static type system, at least if you're using monad transformers or similar advanced techniques. In practice you're not going to be able to get things right without compiler assistance when you have a stack of N monad transformers.)
I also find the bit on having a "pure" annotation vs. having explicit monads particularly insightful. The difference between a type system that only handles "pure/impure" vs. a type system that handles "pure/state-effect/writer-effect/network-effect/etc." is huge.
If "in practice, you're not going to be able to get things right..." then what the hell? That just strikes me as crazy.
Note that I am not necessarily against monads. However, the idea that they are both a good answer and require fairly extensive programmatic help seems contradictory.
I realize we can never reduce programs to things which are trivial and easy to comprehend. However, any new paradigm/trick that will always require compiler assistance doesn't sound like a step forward.
(Of course, in my mind a step forward are tools that don't necessarily need you to change your current languages and programs. Which is one of the things that annoys me with many new languages. Seems we always get a new wave of effectively solved areas of programming with incomplete solutions that are "cool" because they are in the new language.)
I'm not sure how that even follows from what I wrote.
Note, I am all for extensive static analysis. To the point that I am excited about such tools as Coverity and friends.
I am beginning to take exception to requiring ever more from the programmer. To the point that a programmer can't "in practice" specify a program correctly without a type checker. (Which... is what the parent post says. Right?)
I would much rather have it such that "in practice" we can specify programs without help. Since that implies that we can "in practice" read and reason about programs without extensive help, as well.
Sorry, didn't mean "compiler assistance", I meant "programmatic assistance" (as in your original reply). I hope my post makes more sense with that change!
So, it sounds like my post is still the more nonsensical. I'm not against any programmatic assistance. I am growing weary of the ones that require rather large stretches from the programmer to show dividends.
So, I would rather have a static analysis tool let me know that I am using data straight from the user, than I would generate a rather large type system that includes this. See GWT and the "SafeHtml" joy for an example of what sucks in programming.
Now, it can easily be argued that the problem there was Java not being quite strong enough, but even in languages like Scala, things can be difficult.
Of course, I have grown to love the Lisp world where people have pretty much agreed to write in the S expressions. Not because they are the most readable form, but because they really are ascii art of the structure of what you are trying to say.
So, yeah, I'm a jumble of conflicting feelings on this. :)
The average programmer would [...] because that's the way the program was written, as evidenced by the semicolon between the two statements.
Seriously? I don't even know C# and the "var q0" was enough to suggest the type of whatever Where returns is not an array of int (as opposed to the two functions above, with int and bool types), so why would I expect it to have filtered the array and returned it?
Ditto for the 2nd example: in this one it's even more clear that Select is returning IEnumerable<int>, not int[].
In C# the using statement causes the variable initialized at the entry of the block to be automatically disposed of when control flow reaches the end of the block. [...] surprising exception far away in time and space
There should be nothing "surprising" about that; maybe it is if you don't know C#. One of the first things that beginning C and C++ programmers learn rather quickly is never to return pointers to local variables or put those someplace where they'll need to be used after the function returns. How is this any different?
Imperative programs describe computations by repeatedly performing implicit effects on a shared global state. In a parallel/concurrent/distributed world, however, a single global state is an unacceptable bottleneck
Except that this "global state" is not really one thing, and it's not like all of its parts are modified by any one effect.
It appears that this "fundamentalist functional programming" the article is advocating is attempting to make programming more like math and distancing it farther from the real world by adding more abstraction, and if anything, I think abstraction is one thing that a lot of software these days needs far less of.
(Sorry if this is too rantlike, I have somewhat of a visceral reaction to these "the sky is falling!" style of articles...)
It is possible that Erik has a better (or more cynical) idea of the "average programmer" than you or I might. I'm of the opinion that so long it makes sense and has an elegance to it, then it's fine. If "average" programmers can't handle it, they can use another language or something.
I can't substantiate this, but my sense is that the generation of general-purpose, non-academic programming languages that followed C++ lowered the barrier to writing software in the industrial context (think VB.Net, etc.). This can be seen as a good thing in those business contexts, but those of us who are driven to be deeper and more concise in our problem solving, and who are interested in solving more challenging problems, are only starting to realize that we've not been expecting enough out of our languages and compilers. Furthermore, the last two decades of software tool development have gained "lowest common denominator" accessibility at the cost of "dumbing down" the ways we generalists think about and solve problems. The first thing that picking up Scala did for me was make me realize I was expecting way too little from the compilers I use. Why should I be figuring out what the damn type of a value should be (while not throwing out types altogether)!?!
Since Meijer is as an employee of Microsoft I'd extrapolate to say he's heavily influenced in his experience of the "average programmer" by the primary clients of his company.
I think his idea of "programmer" is closer to "mathematician" --- and indeed if you think in that manner, functional programming looks perfectly natural and obvious. The problem is how many of the programmers - and by that, I mean anyone who writes instructions for a machine - actually do think like mathematicians; in my experience, not as many as the ones who can easily grasp the sequential, imperative model of machine execution. Not that thinking of computation as a function that takes an input state and produces an output state isn't conceptually interesting, but I think that's a bit too abstract for a lot of people.
I would just like to point out that the author seems to be confusing Pure-Functional-Lazy with just Functional.
I absolutely agree that if you buy into Lazy programming, you have to buy into entirely Pure Functional as well.
However, many languages and frameworks have demonstrated a high degree of success mixing in functional paradigms (mostly centered around collections)
I would like to refer people to the concept of Collection-Oriented Programming. In this paradigm, applications specify most of their operations as mapping and reducing functions across different collections (trees, vectors, lists, etc.).
Not only does it promote safety, but it works in such high level constructs, it allows the compiler/interpreter to optimize the operation in many ways, such as optimizing out the lambda calls, and even parallelizing the operations.
To name a few Language+Libraries for which this is hugely successful: Ruby, Clojure, C++11 w/ std::algorithm, Scala, and Haskell, of course.
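In Haskell, the collection-oriented style described above reduces to pipelines of map/filter/fold (the example is mine):

```haskell
-- Collection-oriented style: state *what* transformation you want as
-- a pipeline of whole-collection operations, leaving the traversal
-- strategy (fusion, parallelization) to the compiler or library.
sumOfSquaresOfEvens :: [Int] -> Int
sumOfSquaresOfEvens = sum . map (^ 2) . filter even
```

GHC's list fusion can often compile such a pipeline into a single loop with no intermediate lists, which is exactly the optimization opportunity the comment describes.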
He isn't confusing the terms, he's just using the correct one.
Functional programming is about programming with functions - "purely functional" is redundant. It should be obvious that functional means functions, and "function" has a fairly precise meaning which predates computation, and certainly didn't include anything about side effects. Languages which don't use functions are not functional - they would be best described as pseudo-functional, nearly-functional, or mostly-functional, as he uses in the article title.
The only reason we've had to invent new terms like "purely functional" is because the original term has been abused to mean what it never meant - it was used to describe pseudo-functional languages, so we needed a new term to distinguish the two.
> Functional programming is about programming with functions
Actually I think the etymology of "functional programming" comes from programming with functions as first-class objects. So a "functional programming language" is one that supports higher-order functions.
> It should be obvious that Functional means functions, and "function" has fairly precise meaning which predates computation, and certainly didn't include anything about side-effects.
It also doesn't say anything about non-termination. If you want to program with mathematical functions I suggest you use something other than Haskell (e.g. Coq, Agda).
I disagree that '"Purely functional" is redundant'. You can have a functional language that still uses mutable state. Lisps are a good example. Pure functional implies no mutable state. Pure functional is the only way to safely achieve laziness. Which seems to be the author's thesis.
Well, from the parent poster's position, Lisps would not be "functional" - they'd be "pseudo" or "mostly" functional or something. Obviously this disagrees with common usage, but the expressed motivation "mutation disagrees with 'function' in mathematics" isn't crazy. I do think it's insufficient, however, in the face of common usage given that we've collectively chosen that "purely functional" should mean that.
I would just like to point out that the author seems to be confusing Pure-Functional-Lazy with just Functional.
Indeed. The problem with many of his earlier examples isn't using closures, it's mixing laziness with side effects.
Some of his other implicit assumptions seem dubious as well. For example, in the printf formatting example, he refers to "optimizations as simple as common-subexpression elimination", but again, if your subexpression has side effects, eliminating the duplicate isn't an optimization, because it explicitly changes the behaviour.
In any case, his basic premise is flawed. If it's really true that "the slightest implicit imperative effect erases all the benefits of purity" then we'd better abandon Haskell, because even programmers of the grandfather of lazy functional languages occasionally sneak outside for an unsafePerformIO while no-one's looking.
I disagree, because the entire article rests on the premise that imperative programs are bad because they rely on shared mutable state. Here's the thing, though: every complex-enough program relies on shared mutable state; even Haskell programs[1]. Pure functional code might just outsource its shared mutable state to an external database.
The solution, then, is not to do away with shared mutable state, as that's downright impossible, but to provide transactional semantics for that state. Once shared state has clear transactional semantics, the pure-functional nature of the programming language becomes secondary.
Let's imagine a programming language with perfect STM (i.e. an STM implementation that executes the most efficient code possible in every transaction). That language would have none of the problems described in the article, even if it were completely imperative. Hence, the problem is managing shared mutable state, and not existence of side-effects in general.
Pure functional programming could have been one solution to the problem, except for the fact that it isn't really a solution at all: Haskell programs still need a database. But the article assumes there are no other possible solutions, which is wrong. It focuses on one particular solution (which isn't even really a solution), rather than exploring many approaches. It is simply begging the question.
EDIT: I do agree that some partially-functional approaches are inherently dangerous, but I do not agree with the conclusion that the answer is going pure functional.
[1]: Except maybe for compilers, which is probably the most common complex software built in Haskell.
In either case it does not follow that pure functional programming is the answer, especially as it makes "essential side effects" (which are, well, essential) quite cumbersome. Clojure isn't pure functional; it makes essential side effects easy, and non-essential (or, rather, dangerous) side effects hard.
I'm not saying that Clojure is the silver bullet, it's just that the article's conclusion does in no way follow from the premise.
I would strongly suggest looking at Clojure's implementation of Atoms, Agents and Refs. IMO that is the best isolation of mutation I've seen in any language.
What do I mean by isolation of mutation? As stated above, all applications end up needing global shared mutable state (usually it lives in databases, but doesn't have to). So Rich Hickey created mechanisms to do such things safely and concisely.
Exactly. Clojure is a great example, although it doesn't offer a complete solution: atoms/agents/refs are either not general enough or not performant enough to handle many data structures well.
It's an example of the "blub paradox": if you haven't used a pure functional language then it's hard to see what the problem is.
The crucial thing about pure functional languages is that they decouple the logic of the program from the order of the computation. In an imperative language control flow and data flow are explicitly interleaved, with complex dependencies between the two. In many cases a particular bit of code is only correct if another bit of code has been executed previously, and it's up to the programmer to keep track of all these dependencies.
In a pure functional language this coupling between data flow and control flow is broken because all the data dependencies are made explicit and visible to both the compiler and the programmer. That frees the programmer from bothering about it (and automating low level programming issues is always a Good Thing), and it also enables the compiler to optimise it. So for instance in Haskell the compiler will rewrite this expression
map f (map g xs)
into this
map (f . g) xs
The first line would iterate through the list "xs", building up an intermediate result list by applying "g" to every element. It would then iterate through this intermediate list applying "f" and building up the result.
The second line iterates through the list only once, applying "g" and then "f" to each element in turn. Haskell can do this because "f" and "g" are guaranteed by the type system to have no side effects, so it doesn't matter what order they are executed in. In impure languages the order of execution matters, so the compiler can't switch things around in this way without changing the meaning of the program.
The programmer also gets the benefit. If you see "x = complexThing" you can always replace "x" with "complexThing" and vice-versa anywhere that "x" is in scope, without changing the meaning of your program. That makes it much easier to reason about what your program does.
> It's an example of the "blub paradox": if you haven't used a pure functional language then it's hard to see what the problem is.
The blub paradox has always been a stalking horse for condescension, IMO.
I've used a purely functional language, and I don't see the problem to be that significant. I see only benefits from traditional OO languages obtaining more FP features.
> It's an example of the "blub paradox": if you haven't used a pure functional language then it's hard to see what the problem is.
Well, if a problem is so hard to see, maybe the solution to it is not all that important...
Like I said in another comment, I certainly believe that handling shared mutable state is a problem, but I certainly don't think pure functional programming is the only solution (in fact, I don't think it's a solution at all).
I am disciplined enough to keep my functions pure and use immutable data structures where they are useful. I try to apply each paradigm where it makes the most sense. What is a pure functional way of implementing GUIs, for example, to use instead of the MVC/MVVM patterns?
I think it is funny that the most hardcore group of FP enthusiasts seem to sympathize with constructive mathematics (e.g. proof-systems ala Coq) yet the law of the excluded middle is used in the argument here. :)
The notion that functional programming is somehow better than imperative or object-oriented is completely and utterly wrong. It has its benefits. In some situations, it's the best approach. But in most real-world projects I've come across, a mix of different paradigms is optimal.
The trouble starts when you try to get your mixed paradigms to fit well together. The original article was making the point that the cost of mixing functional with imperative code means that the functional code is crippled.
And I've never seen a project where a mix of paradigms was optimal. Any project, and any part of a project, can be tackled in a functional or OO paradigm.
The original article was about strictly side-effect free FP, wasn't it? Yeah, that obviously mixes badly with programming styles that use side-effects.
Most situations I come across are solvable with a reasonably elegant mix of programming styles.
I just don't get Erik Meijer, though I think he's quite entertaining. He seems to enjoy taking the other side wherever he is.
Here's a talk I was at a couple years ago where he makes fun of the ideas he's presenting in this article: http://www.youtube.com/watch?v=a-RAltgH8tw . Key quote: "obsession with monads is a medical condition".
Indeed, in last year's Reactive Programming course from Coursera, Erik's position was that one shouldn't be too fundamentalist about programming languages, and instead pick whatever works from each language.
He seems to be having a bit of fun with us all. That said, I respect people who can change their minds.
>Unfortunately, just as "mostly secure" does not work, "mostly functional" does not work either.
I call BS. Pure OO and pure imperative programming have worked for half a century (a timespan in which functional languages have given us almost NO programs of importance, with the exception of Emacs, AutoCAD and a handful of others).
It's not like people have abandoned C/C++/Java/C#/Go/etc because they don't work anymore.
Plus, the need to get more out of multicore machines is quite exaggerated -- most programs can do fine with just one core (if anything, they are unoptimized even for that). As for the others, programs like Premiere, Final Cut Pro X, Logic, Cubase, Maya, AAA games, etc. -- multimedia and number-crunching stuff where performance is at a premium -- are not done in functional languages (the particular examples are almost all C++).
As for high volume internet systems and services, those have found that Go/Scala/Clojure etc work well for them, to tap those cores.
Appealing to tradition doesn't really help to solve any problems we might face in the future - just because things work "now" doesn't mean they're great. Still, the examples you gave are kind of biased because you're only considering a specific kind of walled-garden software, which does one job but has limited extensibility for further development (although this is intentional for most games).
It should be understood that when Erik says "does not work", he is not saying these languages are useless or have no practical use today - he is suggesting that they are incapable of solving the problems of tomorrow. When looking for solutions to the problems we're facing now or in the future, it's useful to have a look at how we actually build software - what are the "units" which make up the bulk of our software, and how do we combine them. Let's have a look through the decades and reason a little about what these units were.
1940s: Instructions
1950s: Subroutines
1960s: Procedures/Structured programming
1970s: Interfaces over data
1980s: Objects
1990s: Libraries
2010s: Services
future: ???
Obviously these are only approximations, but they give a fair idea of our industry's development. For example objects were in use before the 80s, but they were popularized by C++. None of these were new in their day, but they became the primary units of software which we use in our programs - because it's simply too much effort for anyone to write them all from scratch - we are all using other people's software in our own.
Each stage in this development is an attempt to simplify the previous one, by encapsulating, or hiding, the implementation detail, and presenting a simplified interface for another programmer to consume. Part of the idea is that you shouldn't need to know how the encapsulated system is implemented; you only need to consume it in the ways specified.
So while people are building all this software on top of services now using Go/Scala/Clojure and whatnot - what languages are people going to be using a decade or two from now to combine these into bigger programs?
The suggestion of purely functional programming is one that removes the need to know how the program or service you consume deals with state, because effects are made explicit. The idea of purely functional programming as the solution to multicore/concurrency is just a consequence of having explicit knowledge of state, because we need it to reason about race conditions.
We don't really know what the future will be like, but I imagine it will be one where programs are written to be entirely independent of the hardware in which they run - as they will be intended to run in clouds with heterogeneous architectures - other software will be making those decisions for us, but it can only make them if it can reason about their state. Which suggests we either need to make it explicit, or vastly improve our theorem provers to figure it out for us.
So many weasel words and strawmen in this article.
> Recently, many are touting "nearly functional programming" and "limited side effects" as the perfect weapons against the new elephants in the room: concurrency and parallelism.
Who are these "many", and when did they say it was "perfect"?
I think the premise is silly too. Even if you don't get the full benefit of functional programming without a hardcore functional language, you obviously get some. Limiting side effects is almost always a good thing.
I am going to take the apparently unique position here (after 90 comments) that Erik is correct, at least about pure functional programming (the more modern sense of "functional" rather than the older one that is "merely" about first-class function objects). The value of pure functional programming comes from creating programs out of very mathematically-small pieces... a function of Int -> Int can only do so many things to the output Int, as compared to a function in an impure language which may be only able to do so many things to the output Int but may also arbitrarily manipulate the world in uncontrollable ways. The pure-functional function is exponentially "smaller" than the impure version. Much of the study of the Haskell world right now comes in how to use these much simpler pieces to still build real-world programs.
If you "pragmatically" say, "Oh, but this is so hard, let's just let ourselves use a little bit of arbitrary-world-manipulation in our functions", you've basically returned back to the original world of programming with exponentially-complicated pieces again. As he points out in the article, even with a tiny crack in the wall, the compiler is back to being unable to assume purity. Programs must once again function as if an Int -> Int closure might read from the disk or hit the network. You're really back in the world of OO + old-school functional addons. I'm a bit more pragmatic and will agree that's a nice and useful paradigm, especially if you've learned discipline from time in the pure-functional world, but it is not the pure-functional world, and you will not reap the benefits.
(Which A: yes, they do exist and B: no, they aren't necessarily "mandatory" or the "only way to program". But still, see A. Personally I'd keep pure functional around for either applications that need high quality assurance without breaking the development budget, or programs of high complexity where the state space is big enough even before you start using horrifically unconstrained pieces to try to build your solution.)
I'd rephrase the title a bit... Mostly functional programming is not pure functional programming, and whereas I think "pure OO" didn't have a sweet spot where you insist everything is 100% OO, pure functional programming does. It isn't the only sweet spot. I think careful use of a very-not-pure language like Go, where one merely uses convention to avoid shared state (a very "pure" idea), can still be a sweet spot on its own. But there is another, where you go "purely pure", and for that one, you really do have to go purely pure, or you're not using it... and if you've never done it yourself, because you've only used impurely pure languages, you don't really have an opinion yourself because you've never tried it. Very nearly the only practical way to try it right now is Haskell.
Now that I've read your post, I'd say that impurity is much more similar to "goto" than to object orientation.
A language with "goto" is not structured. It does not matter how often it's used, or how similar the rest of the language is to a structured one. The same is true for side effects.
(Funny thing that the most used language has both.)
Whether code is structured is not really a property of a language; every function call is effectively a "goto", albeit with more convenient syntax. If your functions are partitioned in a strange way, you can just as easily produce spaghetti code. Same with nested if statements. You can write well-structured code in Fortran 77, for example, even though most standard control structures involve a goto. Absence of a goto statement is neither necessary nor sufficient for enforcing structured code. One real advantage is perhaps that the compiler has more invariants to work with.
There is an entire class of compiler optimizations that can only be done on structured languages. If you include a "goto" command in a language you must either do a huge amount of static analysis to map your language onto a structured one, or live without such optimizations.
The fact that some code does not use a feature of the language does not help the compiler generate a faster program.
Also, no, function calls are not equivalent to "goto".
Compiler IL reduces all branches to the equivalent of "goto", so adding a few more is just no problem at all.
More important obstacles to optimization include use of exceptions (now that makes control flow complicated), memory aliasing (any write to a char * is a scheduling barrier), overly-defined int math (can't prove loop iteration count), and the fact that your compiler has no idea what the hell is going on inside an x86 chip anyway.
What you say is correct, though I would say that bundling up common patterns in syntactic constructs has direct advantage (you don't have to look carefully to make sure it actually fits the pattern you think it fits, &c).
Donald Knuth uses GOTO. It is perfectly safe if kept within a function: invoke a function, jump around like crazy inside it, exit to caller with return value. No problem.
Last I looked, jumping around like crazy was hard for compilers to follow and would cut short some optimization. Not an issue when Knuth was originally writing, much more of one today. It's possible that this has changed, though - it's been a while since I played with this in depth. There remains Dijkstra's issue that it can be hard for humans to follow in any event, which Knuth didn't really directly dispute - his position was that there remained some narrow use cases where the speed improvement was worth it, which is different than "jump around like crazy".
"Completely functional" obviously doesn't work as there would be no side effects aside from your computer getting warm. Every functional language has escape mechanisms that allow you to see what the program is doing. "Mostly functional" is as close as you can get to functional.
Erik addresses this in the article. What he calls "mostly functional" isn't FP with some carefully used escape mechanism, but imperative languages adding FP features here and there. He argues that the benefits of true FP get negated in hybrid languages.
Just an observation: Erik was one of the lecturers in the Coursera course "Introduction to reactive programming", co-taught by Martin Odersky, creator of Scala. Erik was teaching reactive extensions for Scala (a port based on his work at Microsoft). The course is highly recommended, by the way.
I think this is a marketing article (the author is working as a consultant now). With all due respect to Erik Meijer for his contribution to functional programming, this article disseminates FUD that you are using the functional features of programming languages incorrectly.
In my experience, using imperative programming with elements of functional style is very productive, and I don't need to introduce monads everywhere to be more productive. Separating side effects and making code as pure as possible was good practice in old-style OO programming and will be so in the future; introducing smart-sounding words for this which most readers don't understand (BTW, I do understand what a monad is) is just a marketing trick.
Is this a satire? I mean this seems like a satire of the fact that the unfortunate framing of functional programming in terms of "purity" and "impurity" clouds a very abstract question with the intense instinctive reactions we have to questions of personal hygiene. Notwithstanding the fact that a programming language is "impure", you cannot catch anything from it. It cannot defile, pollute, or contaminate you. Nor is there any power that will reward you in this world or the next for your supererogatory devotion to "purity" in programming.
"Impure" doesn't have negative connotations in this context, and in fact "pure" and "impure" are common informal terms when discussing languages with or without control of side-effects.
C functions are really procedures, except when they have no side effects on global variables. Indeed, a C function can be regarded as pure even if it contains imperative state changing code provided that all the mutable variables involved are local temporaries that only exist for the duration of the function call. Provided that f(x) always returns the same result for the same value of x it shouldn't matter how it is implemented.
APL element-wise operators are pure functions in an imperative language that has destructive assignment. However, it is also well suited to being used to implement GPGPU parallelism with local temporaries inside imperative procedures that appear to be pure functions from the outside.
Another form of parallelism arises from pipelined dataflow where tasks can simultaneously work on different parts of the whole linear computation so that those waiting for the results (further down the 'conveyor belt') receive more data from a potentially unlimited input stream 'just in time' whilst they consume more new data from their supplier or the source.
Taking things in the opposite direction a purely functional program can be seen as being a frozen moment in time that is subject to extrinsically defined constraints. Much like a spreadsheet these values can transition between epochs so they become ordinary mutable variables in a high-frequency event loop. This is the basis of exploratory "live" programming as an interpreter 'reacts' to dynamic changes in its source. It is also a facility provided by Mathematica where it permits the user to manipulate a graph of some complex function through recalculations based on new values of some sliders.
Every technique has its proper place and the latency incurred by Erlang mailboxes in the pursuit of fault tolerance and convenient hot-swapping of distributed modules is less of a performance issue when there would be a delay anyway given that the code is running over a network of computers. Erlang solves every significant problem of concurrency and it is highly reliable, unlike C# or Visual BASIC for which he is responsible. He really isn't in a position to criticise.
Both FP and OOP are extremes. Really, you can get by with Prototypes and have something type-oriented rather than class-based, like Barbara Liskov's CLU. These can be given the capability of operating like Actors to simply take advantage of multicore / multiprocessor / multicomputer architectures. It helps your clarity of purpose if your language encapsulates persistent state in these Prototypal Actors without silly workarounds like C++'s friend function to make it go faster.

A lot of your global state can live 'outside' of the program, only to be seen from within as a set of global constants that change every 1/60th of a second when the runtime is reborn as if run from scratch with a slightly edited source. This may mean that, apart from the output pipe of your dataflow, you can only put stuff IN to the BLACK BOX and must trust it to create its own views as to its current epochal value, just as a videogame renders a new frame of animation.

All of the proponents of FP and OOP - and for that matter Actors - seek to prove that all programs can be written using only their newly hyped paradigm. This is typical of ivory tower academia unsullied by the necessary pragmatism of the workplace. If you admit that there is something good about Actors then you don't have to learn Monads. If you admit that 'cloning' is a cleaner solution than the often abused (for the sake of convenience) 'implementation inheritance', and come to realize you can easily recreate classes / interfaces as abstract prototypes (i.e. they are just a pattern), you can jettison a whole lot of distracting OOP terminology and inscrutable UML diagrams as the work of self-promoting "architecture astronauts".
Yet, if you embrace symbolic programming as seen in Mathematica, you get to optimise your algorithms while not losing insight into how their individual terms are transformed in your super-accessible declarative 'executable specification'. At that point you realise that a symbol with an unknown value (i.e. a conventional mathematical variable), which can only become attached to a value once (per epoch), can be viewed as awaiting dataflow from a pipeline without any extra syntax obscuring your intent - you just call a variadic function and let the receiver await a finite list of arguments; if there are more waiting in the stream beyond those it has already taken, it awaits the same number of parameters again.
Really, the whole is greater than the sum of its parts - especially when those parts (paradigms) are dovetailed nicely.
I've been working on my own multiparadigm programming language for many years and Erik Meijer just seems too damn bleak.
If you strongly restrict mutability (significantly facilitated in Scala with case classes), then OO and FP dovetail quite elegantly. The Kiama language processing library is a fantastic example of getting the best of both (there are many others):
That said, one of Scala's most compelling (business) features--great Java interop--is also its greatest liability. While I appreciate the great interop, Java's lack of state-mutation controls in the language and JVM instruction set interferes greatly with the FP/OO impedance matching, generating misconceptions about the viability of hybrid FP/OO approaches.
From 1994 to 1996 I went from being an 80%/20% C++/Python developer to a 90%/10% Java/C++ developer. My productivity went through the roof, but the lack of something akin to C++'s `const` references and `const` methods was a glaring mistake, one that remains to this day a shadow mandating contorted defensive programming techniques. Adding `final` almost made it worse, as it's nuanced and overloaded, and ultimately doesn't do what less-experienced developers think it does. Because of that one serious flaw, was I ready to go back to C++? Of course not, because I was able to render my ideas into working software at a faster pace--an extremely important factor--but I had to move forward ever aware of the language's weaknesses and how to effectively ameliorate them.
Over the last year I've gone from developing in Java 90% of the time to Scala 75% of the time. In this transition I have seen a similar jump in my productivity (almost, but not quite, as big as C++ to Java, and after a longer learning curve). However, I have approached it with the same multi-dimensional awareness of one's paradigmatic assumptions and how they play with and against the language's facilities.
For me, Scala was my gateway into the world of FP thinking, which radically changed the way I think about software problems. Scala has also had a profound impact in developing a deeper appreciation of the power of a more formal type system. Both of those paradigm-shifting features of the language have allowed me to be more creative, expressive and concise in my software writing, with bountiful rewards on multiple axes.
However, I've also stumbled along the way--becoming enamored of features I didn't fully understand, being too expressive when simplicity would suffice, being FP for FP sake, etc.--but I sure am glad I had those opportunities to stumble, and do so in a "fail-fast" manner. The process has been invaluable, and I'm a much better programmer today for it. I never entered the process assuming the FP/OO/CT academic visionaries or the Smalltalk/C++/Scala/Haskell/ML/OCaml language inventors offered me any "promises". They gave to the world constructs for others to think about and solve software problems, take it or leave it, to live and die in the ecosystem of ideas. I know it is my responsibility as a professional programmer to understand the pros and cons of those constructs and tools, weigh them against my goals, experience and intelligence, and go into a relationship with these tools knowing, I'm ultimately the one responsible for the final product, and need to know what I'm doing.
All this is to say, I don't think sweeping generalizations or pointed nit-picks help in assisting people to select the best language and paradigm for the problem at hand, understanding the strengths and weaknesses, and how to manage those trade-offs. It isn't, and doesn't have to be, a "one-size-fits-all world" (as someone else here already referenced the great Stroustrup quote: "There are only two kinds of languages: the ones people complain about and the ones nobody uses."). And at the end of the day, the ultimate responsibility rests in the hands of the individual professional developer. In my developing awareness of the FP viewpoint I have most appreciated and benefited concretely from the pragmatic viewpoints in the middle. It is from the middle that one can more clearly see both perspectives, and from that develop a third, more holistic and encompassing viewpoint that harnesses the power of both.
Going to any extreme makes some things horribly difficult and going to another does the same for other things. So, optimally, multiple paradigms coexist in the single codebase, applied where they're most useful.
Functional programming with as many immutable bits as possible is definitely a good start. I generally do that for whatever problem I'm solving: I have a model for the data and then I write (if at all possible) pure functions to transform the inputs into meaningful outputs. But then you need to know where to hand things over and move to some other paradigm that does something else right and merely drives the functional parts from the top level.
For example, a data analysis library can be written with minimal state, using only pure functions, but if -- and when -- you need some sort of user interface so that the program can actually be used, an imperative/procedural approach is generally the most natural one, because UIs are basically I/O. If you're adding a graphical user interface, you might use an object-oriented approach to build the UI tree, which is probably the world's most idiomatic, canonical use for OO anyway. But even those are generally driven by an innately imperative event loop.
Also, note that the different approaches or paradigms aren't language specific either.
In the first stage, languages are tools that shape your thinking into accepting new programming paradigms but at some point you have a number of different ways of thinking in your head, and you can just forget about the languages they came from.
But in the second stage, you can just think directly in paradigms: you can consider different ways to build different parts of your program but you might actually use only one language to implement everything. You can write functional, imperative, object-oriented, and whatever code in C. Or you can use several languages with strengths in each paradigm, depending on what trade-offs produce the best engineering in each case.