Learning Haskell is no harder than learning any other programming language (williamyaoh.com)
292 points by nuriaion on Oct 6, 2019 | hide | past | favorite | 462 comments


The biggest lie about Haskell is that it's easy to learn. No it's not, and I do use it at work. Sure, it's not THAT difficult to get a basic understanding until you get to the usual Functor, Applicative, Monad stuff, which you can understand if you imagine them as context bubbles. Once you put something into a side-effect bubble (IO), you cannot take it out, so you're obligated to work inside of that bubble. This analogy should get you far enough. You're now ready to build toy projects.
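A minimal sketch of that bubble analogy in code (with `pure "hello"` standing in for `getLine` so the example is self-contained):

```haskell
import Data.Char (toUpper)

-- A pure function: no bubble involved.
shout :: String -> String
shout = map toUpper

-- getLine :: IO String, but there is no escape hatch IO String -> String.
-- Instead, lift the pure function into the bubble with fmap.
shoutedLine :: IO String
shoutedLine = fmap shout (pure "hello")

-- Or work inside the bubble with do-notation:
example :: IO ()
example = do
  s <- shoutedLine   -- s :: String, but only within this IO block
  putStrLn s
```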

But even if you finish the Haskell Book (http://haskellbook.com), which is about 1300 pages, you're still going to be unable to contribute to a serious codebase. Anyone who says otherwise is lying. Now you have to understand at least 20 language extensions, which you find scattered at the top of files as {-# LANGUAGE ExtensionHere #-}. Now you have to understand how to really structure a program, whether as a stack of monad transformers, free monads, or anything else. Then you get into concurrency, and to do that you have to understand how Haskell actually works, what non-strict evaluation does, etc. etc. Otherwise you're going to get some nasty behaviour.
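To make the "stack of monad transformers" structure concrete, here is a minimal sketch using only the transformers package that ships with GHC; `Env` and its field are made-up names for illustration:

```haskell
import Control.Monad.Trans.Class (lift)
import Control.Monad.Trans.Reader (ReaderT, asks, runReaderT)

-- Hypothetical application environment.
data Env = Env { envGreeting :: String }

-- The application monad: a single transformer layered over IO.
type App a = ReaderT Env IO a

greeting :: String -> App String
greeting name = do
  g <- asks envGreeting                    -- read from the shared environment
  lift (putStrLn "logging a side effect")  -- plain IO actions must be lifted
  pure (g ++ ", " ++ name)

runApp :: App a -> IO a
runApp app = runReaderT app (Env { envGreeting = "Hello" })
```

Real applications stack several such layers, which is where the incompatibilities and lifting boilerplate start to bite.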

You think I'm done? Let's get to Lens. You can use Lens after a relatively short time of reading the docs. But to understand Lens? Very few people actually understand Lens.
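Part of why Lens is easy to use but hard to understand: the core type is a rank-2 polymorphic function, not a getter/setter pair. A dependency-free sketch of the simplified type (the real library generalizes this further, to `Lens s t a b`):

```haskell
{-# LANGUAGE RankNTypes #-}

import Data.Functor.Const (Const (..))
import Data.Functor.Identity (Identity (..))

-- A lens is a function polymorphic over the choice of functor f.
type Lens s a = forall f. Functor f => (a -> f a) -> s -> f s

-- A lens onto the first component of a pair.
_1 :: Lens (a, b) a
_1 f (x, y) = fmap (\x' -> (x', y)) (f x)

-- Getting and setting fall out by picking the right functor:
view :: Lens s a -> s -> a
view l = getConst . l Const

set :: Lens s a -> a -> s -> s
set l x = runIdentity . l (Identity . const x)
```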

Don't get me wrong, Haskell has spoiled me, and I don't really want to touch any other language (I still like Clojure, Rust, Python, Erlang). Once you get past that the language is a joy to use.


> You think I'm done? Let's get to Lens.

Lens is a library, not part of Haskell the language, and it's also not particularly widely used. If you are going to conflate the ecosystem with the language, then we could equally talk about the complexity of dependency injection frameworks, AbstractSingletonProxyFactoryBeans, aspect-oriented bytecode weaving, and Enterprise JavaBeans when discussing how much "easier" Java programming is.

I have programmed in both Java and Haskell professionally on multiple codebases. I honestly found more ad-hoc and incidental complexity in the Java world. At least the Haskell libraries were generally consistent in the abstractions they used. Java the language also had a lot of hidden complexity, for example its memory model (required knowledge for concurrent programming).


Yes, you should consider all of those when you talk about how hard Java is to learn. Java is a notoriously complex language and one should expect to put a lot of hours into learning it.

Haskell has a different kind of difficulty. One must expect to feel dumb for a long time, because it's composed of hard concepts. Those are much simpler concepts, but they aren't any quicker to learn.


Understanding the Java memory model properly is not easier than anything Haskell throws at you.


Not to refute your findings, but it seems unfair to say that commonly chosen aberrations are not hidden complexity while the Java memory model is. The only comparison you can make is the complexity inherent to the task versus that which is added by the choice of implementation, and that is subjective, based on familiarity.


I was calling out the Java memory model and concurrent programming in Java as being very complex. It's inherent to the language itself and almost impossible to change. I was only calling it "hidden" complexity as most people do not see an issue when writing single threaded programs. Concurrent programming I genuinely believe is easier in Haskell.


And also, using the java.util.concurrent package sidesteps most foot-guns involved in rolling your own synchronization.


Perhaps most, but not all. Even something as simple as a date formatter in Java is not thread-safe.


That was fixed with the Java 8 package java.time.format, https://docs.oracle.com/javase/8/docs/api/java/time/format/D...


Ok bad example, but it shouldn't have taken them 15+ years to fix it.


SingletonFactory is an oxymoron.

Edit: oh wow, it actually exists


That's not oxymoronic at all.

You have a Factory<T> interface. Depending on T and context, it may be reasonable to provide an implementation of Factory<T> that returns a singleton, something taken from an object pool, or a new instance of an object.

I haven't worked in an IoC-heavy world for awhile, but a SingletonFactory is neither silly nor an anti-pattern.


> SingletonFactory is an oxymoron.

It's not, you still need to create the one instance when it is first needed. It's possibly excessive abstraction, but it's not an oxymoron.


I've come to believe that "excessive abstraction" is practically a synonym for "object oriented programming". It takes an incredible amount of discipline to not abstract unnecessarily in an OO shop.


It's not just excessive abstraction, separating Singleton and SingletonFactory can even reduce encapsulation since you can't make the Singleton constructor private.


You can if it's in the same file.


A factory method returns instances, but there is no obligation for those instances to be created on demand or each time the function is called. Some Singleton implementations only initialize the Singleton object at first call, for example, while others initialize it at startup.


Come on it only took me 3 semesters of category theory and I was ready to use ‘do’


Thinking that you need to learn "Category Theory" in order to write Haskell is like saying you need to read George Boole's 'Investigation of the Laws of Thought' (1854) in order to write a conditional statement.

In other words, it's not.

"Category Theory" is for mathematicians and people interested in highly abstract mathematical theory. It doesn't help you write Haskell programs.


Never did I assert the need to learn category theory to write Haskell. I was speaking of my own experience; if you can understand the underlying category and type theory that goes on behind the scenes implicitly, and can write Haskell programs without ever thinking of the theory behind them, that is awesome. Personally I don't like languages that feel like magic, so I strive to learn what is behind the abstractions, and for me that meant getting a deeper understanding of category theory. Furthermore, I feel that as a computer scientist, the more math I can learn and absorb, the deeper my understanding grows on a multitude of subjects. Sorry if my post came off as flippant; I was simply trying to use humour to accentuate the difference in the conceptualization of Haskell vs. a non-pure procedural language.


You need it for a deeper understanding of type classes and algebraic data types. You can get by without it but I would say your understanding of things like a "functor" will be flawed.


This is not true. For example, the Typeclassopedia contains all you need to know about functors and doesn't go into detail about the underlying Category Theory.


Agreed. Having worked closely with them, I would say such Haskell luminaries as Simon PJ, Lennart Augustsson and Neil Mitchell have not "learned category theory" (and I don't think I'm being too offensive to them if I state that). In fact the only figure of highest repute in the community who puts much stock in category theory is Edward Kmett.


excerpt from typeclassopedia:

"The wise student will focus their attention on definitions and examples, without leaning too heavily on any particular metaphor. Intuition will come, in time, on its own."

typeclassopedia admits that the intuition is missing. I would go on to argue that just going from the definition alone you would think that a functor is anything that is mappable and that fmap simply maps a function across a functor to produce a new functor.

The intuition that cannot be grasped without some category theory is that fmap actually lifts a regular function of standard types into a function between functors. A functor is more than just fmaps.
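For what it's worth, the two readings correspond to the same `fmap` with its type bracketed differently; no external libraries needed:

```haskell
-- "map over a container" reading:
--   fmap :: Functor f => (a -> b) -> f a -> f b
-- "lift a function between functors" reading (same type, re-bracketed):
--   fmap :: Functor f => (a -> b) -> (f a -> f b)

lifted :: Maybe Int -> Maybe Int
lifted = fmap (+ 1)   -- (+ 1) lifted from Int -> Int to Maybe Int -> Maybe Int
```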


You draw the wrong conclusion from that excerpt. It does not say "intuition that cannot be grasped without some category theory". It is saying that you need to see many concrete examples to learn something abstract. This is how it works in basically all of education, and there isn't an easy substitute. Learning "category theory" will not replace your need to see concrete examples to grok something inherently abstract. You'll have to go through the concrete examples anyway.


Those concrete examples aren't visible until you do category theory. That is what I'm saying. You are drawing the wrong conclusions.

There is literally not enough information in the definition of the type class Functor, and in examples of usages of that definition, for a programmer to truly understand the concept of a functor. That is the conclusion I am deriving. It is not wrong. You are wrong. What's going on is that you are deriving a conclusion convenient to your viewpoint.

Sure you can get by programming haskell without category theory just like you can program without knowing the notion of an algorithm. However in both cases you are worse off without the knowledge.


I was able to use `do` with only C programming experience under my belt and a little LYAH. Cmon now.


`do` still bugs me because it obscures what's really going on. I'd rather work with monads directly, even though it requires more verbosity.
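For the record, what `do` obscures is purely mechanical; these two definitions desugar to exactly the same thing:

```haskell
-- do-notation version:
withDo :: Maybe Int
withDo = do
  x <- Just 2
  y <- Just 3
  pure (x * y)

-- The same computation written with explicit binds and lambdas:
withBind :: Maybe Int
withBind =
  Just 2 >>= \x ->
  Just 3 >>= \y ->
  pure (x * y)
```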


How do you even take 3 semesters of category theory?


By failing twice.


>Sure, it's not THAT difficult to get a basic understanding until you get to the usual Functor, Applicative, Monad stuff, which you can understand if you imagine them as context bubbles. Once you put something into a side-effect bubble (IO), you cannot take it out, so you're obligated to work inside of that bubble. This analogy should get you far enough. You're now ready to build toy projects.

The analogy with Promises, which by now everybody knows, is quite useful for getting the idea, even if it's pedantically flawed because Promises don't satisfy this or that criterion of Monads...
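For anyone who does know Promises, the correspondence is roughly that `>>=` plays the role of `.then`, chaining the next step onto a value still "inside" the context. Here `fetchUser` and `fetchPosts` are made-up stand-ins for real async calls:

```haskell
fetchUser :: IO String
fetchUser = pure "alice"            -- stands in for a network call

fetchPosts :: String -> IO [String]
fetchPosts u = pure [u ++ "-post1", u ++ "-post2"]

-- JS:      fetchUser().then(u => fetchPosts(u))
-- Haskell:
pipeline :: IO [String]
pipeline = fetchUser >>= fetchPosts
```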


> which by now everybody knows

I wouldn't make that assumption. Between people who program in languages that don't offer promises, and people who use promises but still get them wrong, there's not exactly what I'd call a strong base of understanding.


> You think I'm done? Let's get to Lens.

Couldn't you also say that about C? If you think K&R C is enough of an introduction, wait until you see the Linux kernel!


Or the 200+ cases of undefined behaviour.


I agree. I've never used Haskell at work (apart from some toy projects I did completely on my own as proof-of-concepts and/or quick-fixes) but I did spend a fair amount of time learning it myself, and that's what it took to get to the point where I could understand what was going on in "real" projects, let alone contribute at that level.

And, yes, Lens is where I stopped. I grokked the basic mechanism and used the library in some limited cases but a real understanding of the whole zoo of Lens-related types always eluded me. Doubtless I could have figured it out given time, but working on my own toward my own goals it was a real slog. So I started learning Rust instead. :)

Oh, and TH (Template Haskell) is an absolute nightmare. Just putting that out there. No, I don't know how I'd do it better.


As someone who is very interested in the experience of people learning Haskell I'd like to ask a question. Why did you stop learning Haskell after lens eluded you, rather than just turning your attention to some other aspect of Haskell instead?

(After all, I've been interested in lens since it first came out seven or so years ago. I'd consider myself an expert in that type of technology and yet there are still parts I don't understand!)


I felt that I'd reached a point of reasonable proficiency with the language, and since I'd been learning it mostly for my own entertainment I decided to turn my attention to some other new hotness (Rust, IIRC) rather than slog through increasingly more difficult and obscure (to me) features of the language with diminishing returns for my personal projects.

Entertainment is maybe not the right word, though it was definitely entertaining. Learning Haskell was the beginning of a personal renaissance in my approach to programming, and as a self-taught programmer (and professional software engineer looking to expand my horizons) that was a huge deal.


> I felt that I'd reached a point of reasonable proficiency with the language

Ah, well that puts quite a different spin on things!


What, you think I was wrong? :)

(that very well may be the case)


No, not at all! I previously thought you meant "Haskell was hard for me to learn because I couldn't even learn lens" when you meant "I learned a lot of Haskell but couldn't be bothered to learn lens"


> Anyone who says otherwise is lying.

Anyone who has experienced things you didn't should be discredited, because obviously your personal take is the definitive take about it.


But that is true for many other languages of similar caliber; Rust and C++ are also extremely complicated!


This. Remember that to be productive in Python, C++, and many other languages, you don't need to learn most, let alone all, of their concepts. That is not true for Haskell, where if you know a lot but not everything, you are likely to run into trouble quickly. An example from my own experience: I've read 1/5 of "The Haskell Book" and know the essentials pretty well, but throw mature Haskell code (as in "something people made for a real purpose") at me, let alone monads, and chances are I'm blown away.


But if I throw mature Python/C++ code at you, with concepts you don't know, how is that easier?

Those Haskell threads are full of exaggerations. I don't know/use a lot of Haskell concepts and I can still produce software with it. You can be just fine with IO and passing everything as arguments. Which is, well, what the article is talking about.


It's easier because Python/C++ have huge communities at this point and Haskell does not. It's also easier because most people learn some variety of Java/Python/etc as their first language. So languages with similar structures and conventions are easier to grasp.

Every time someone has suggested I learn Haskell the discussion goes similar to suggesting I learn German. Sure German from a language perspective has some advantages over English in some situations. Some even argue that it's an objectively better language. But I live in Pennsylvania and speak English as a first language. Outside of moving to Germany/Switzerland/Austria, how do the advantages of German provide enough benefit for me to invest the massive amounts of time to become fluent?

Sure if we could turn back the clock on Computer Science education and have everyone learn lisp as their first language maybe we'd all be avid Haskellers these days and be better off for it. But given how history went it is "harder" to learn and less productive to work in due to external factors alone, regardless of how intrinsically easy/hard the language may be (which is entirely subjective).


There's still plenty of value in learning Haskell even if you're not going to use it daily.

I'd go as far as saying it's essential for anyone who likes programming beyond just a profession. Same with lisp.

You don't need to learn all of the language extensions or how to architect serious applications with free monads, as the OP said, but it's very useful knowledge and one of the pedestals from which all other languages should be judged.

Plus you'll understand why Idris and dependent types are an interesting future development in safety and language design. While also understanding the source and inspiration of many features in far more popular languages like JS and Rust. And there may be a real future in it via PureScript and other similar projects.


Why do you mention Python and C++ in one breath? C++ is an extremely hard and esoteric language. Python is a very easy language. They are like two opposite poles.


C++ can be quite manageable in a professional setting, where you get Qt or boost. Then it's like any other programming language.

I think the major troubles for beginners with C++ is that you can't do anything out of the box like work with files, create a directory or perform a HTTP request. Whereas python or java are ready to use.


Eh, I strongly disagree with this characterization. I did C++ for a few years at Google and am now back to developing in it, and I'd say it's probably my favorite language to work in, within a codebase that sets the right constraints on it. But that doesn't change the fact that it is replete with unintuitive footguns even for those comfortable/experienced with it, in a way that Python or Java absolutely isn't.

The closest thing I can think of in Python is passing a mutable object (like an empty list) as a default parameter value. C++ is littered with bug-prone landmines like that.


[flagged]


>Python is anything but easy, it is the same caliber as C++, just lacking implementations able to execute as fast.

Yeah, that's stretching it to absurdity.

Python is easy to get started, and easy to adopt any of the extra features (e.g. slots, metaprogramming, async, etc) piecemeal. And easy to read most codebases.

Haskell is not easy to get started, not easy to adopt the extra features piecemeal, and not easy to read most codebases.


Easy to get started, undoubtedly yes.

Easy to be a Python black belt certainly not.

While it looks piecemeal to adopt metaprogramming, decorators, multiple inheritance, slots, operator overloading, extension of built-in types, generators, and async/await, their combined use in the hands of clever programmers is anything but easy.

Doing Python since version 1.6.


>Easy to be a Python black belt certainly not.

Perhaps, but that's a different goalpost...

Plus it's still easier to be a Python black belt than a Haskell one...

>Doing Python since version 1.6.

Doing Python since 1.5 (at university) and 1.6 professionally. I remember when it didn't have half of its current constructs...


> Haskell is not easy to get started

Agreed

> and not easy to read most codebases.

Hmm ... debatable. I have to dig in to the implementation of Python libraries and Haskell libraries regularly. I'm much more confident that I'll come away understanding the latter than the former!

> not easy to adopt the extra features piecemeal

I can't see any evidence of this. Can you name a few Haskell features that can't be adopted in the absence of other features?


I didn't say that they "can't be adopted", but that it's "not easy" to adopt them (and specifically it's not as easy as Python or even Java, C#, Lua, whatever) -- because they come with a bigger mental burden...


OK ... can you name a few Haskell features that can't be adopted easily in the absence of other features?


How do you functionally manipulate an indexed mutable structure like a vector?

How about the common task of CSV file manipulation? Database access? (Simplest of FFI.) Not reopening nor reparsing the file every time you want to do something? At the same time, with known upper bound on memory use?

Note how almost none of standard CRUD and web stuff is easy to write in Haskell from scratch and libraries do not help a lot.

You always end up in some variant of the IO monad, typically multiple incompatible ones at the same time, making the pure functional nature of the language moot. You get to glue the various kinds of IO explicitly.

Haskell feels like writing a CPU (high-level state machines everywhere), except with less readability and more verbiage than Verilog. The propensity of Haskell programmers to abbreviate everything and add redundant "helper" functions under different names does not help.

Programming is stateful because the world has a state, and Haskell's handling of state is annoying at every step, even for someone versed in it.
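On the first question at least, a standard answer exists: mutate inside ST and seal it with runST, so the function stays pure from the outside. A sketch using Data.Array.ST from the array package bundled with GHC (the vector package supports the same pattern):

```haskell
import Data.Array.ST (newListArray, readArray, runSTUArray, writeArray)
import Data.Array.Unboxed (elems)

-- Pure from the outside, mutable on the inside: double every element of
-- an indexed unboxed array in place, then freeze the result.
doubleAll :: [Int] -> [Int]
doubleAll xs = elems $ runSTUArray $ do
  let n = length xs
  arr <- newListArray (0, n - 1) xs
  mapM_ (\i -> readArray arr i >>= writeArray arr i . (* 2)) [0 .. n - 1]
  pure arr
```

Whether this counts as ergonomic is, of course, exactly the disagreement in this thread.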


Firstly, the question was very specifically to coldtea, to help flesh out his/her claim that it is "not easy to adopt the extra features piecemeal". So far that claim doesn't seem to have been substantiated.

Secondly, are you really saying you believe that Haskell doesn't have all these features? That there aren't good ways of mutating arrays, writing CRUD apps, combining effects, etc.? Presumably then, your jaw would hit the floor if I could demonstrate that everything you believe is false.


>Python is anything but easy

There's plenty of shade you can throw at the good old snake, be it package management, performance/speed, or formatting peculiarities, but this one is the most unlikely I can think of.

What makes it not easy in your mind?


Python allows for very creative programming: every feature looks easy in isolation, but when used together they can open the door to some head-scratching.


I think "easy" is the biggest bait in the industry.

Python is a language where, by default, a typo is a potential runtime error. That's far from easy in my book.


Ever mistyped an output file name in Haskell? This is one of those myths about static typing that really gets under my skin. I worked in a large, hardcore Haskell production environment for several years. There was no difference in the amount “silly typo breaks something later at runtime” types of mistakes, none, between that and the decade or so of production Python experience I have.

These bugs enter your system and manifest in such weird ways that it always will be the job of unit and integration testing, not static typing, to catch them. Not with modeling states in the type system. Not with phantom types. Just nope. Frankly to me this is what distinguishes a senior engineer from junior engineers in statically typed languages. Do they understand the language design faculties don’t actually protect them, abandon the misguided idea of encoding protection into the language’s special faculties, and instead put that effort towards making the testing infrastructure easy to understand and update and very fast to run.


> There was no difference in the amount “silly typo breaks something later at runtime” types of mistakes, none, between that and the decade or so of production Python experience I have.

This is fascinating. You are basically the only person I know who has used Haskell extensively who claims this. Have you considered writing up your experience as a blog post (or even more formally as a technical report)? I think it would be extremely helpful to the programming community and particularly the Haskell sub-community for you to share your point of view.


Briefly, there is something very similar to Amdahl’s Law for parallel speedup but for removing the thin layer of defects checkable by static typing. Most defects in any real system aren’t like that, to such a degree that the whole correctness bottleneck is concentrated so heavily in unit and integration testing and the extra language complexity, extra lines of code for type annotation or registry of type system designs, slow compile times or constraints on mutation imposed by the static typing don’t pay for themselves through meaningful defect reduction. It’s like the cost of shipping data to a GPU. The efficiency gained by processing it in parallel on the GPU device must be much greater than the transport cost, or it’s not worth it.

But in terms of me ever wanting to write this up with rigorous technical examples, I mean, just look at the level of discourse and tribal downvoting in a thread like this.

Even setting aside that this experience was spread across a quantitative trading company and in a large public financial technology company, meaning I definitely can’t publicly share a lot of details of those systems (which adds tons of required effort to convert examples into totally isolated tutorial-like standalone samples), why would anyone with a valuable technical dissenting opinion about Haskell want to open themselves up to that kind of religious backlash?

It’s demoralizing and discouraging for me even just in a thread like this one, where I’m just some mostly anonymous commenter talking subjectively about my experience in small comments.

There’s no way I’m sticking my neck out on a big technical blog post or technical paper about why leveraging a static type system doesn’t meaningfully reduce defects in real systems.

Also to be clear, I think static typing is fine. Some people enjoy it a lot or have clever ideas about using it for expressiveness. Some people also write amazingly concise dynamically typed code that covers a huge variety of use cases in a safe way with pretty much no overhead code to register anything at all about those use cases. People are free to choose their tools and whatever gets a job done is totally fine.

The part I find disingenuous is that it seems like only the static typing zealots are trying to come up with a reason to think a certain way of doing things strictly dominates or supersedes a different way of doing things, and it’s totally disingenuous to act like the benefits of static typing on defect rates would be such an argument for “universal” applicability of one certain paradigm.


To be fair, you are expressing an allegedly subjective account of your personal experience in what to me sounds like an overly assured and conclusive manner, coupled with expressions like "static typing zealots". It reads a bit like you are implying that there are only zealots, and then there is this account you're sharing, which is the definitive truth.

It similarly turns me off trying to discuss this constructively, even though my experience of Haskell is almost the complete opposite of what you're saying here. I guess that leaves space only for low-effort discussion and people happy to hear that Haskell turns out to not be worth it after all.


My opinion is not allegedly subjective, it is subjective. I don’t expect anyone to do anything with the comments I write. They don’t prove anything, but someone may find it useful to hear that a person with experience decided to have a dissenting opinion of Haskell in practice.

I will say, however, that just as I mentioned in my comment, I’m willing to say static typing is fine. There are lots of tools in a toolbox. It is one of them.

I don’t believe a lot of commenters who seek out this discussion would give a similarly charitable view of dynamic typing, and in my real life experience, these are people who superficially dismiss projects written in dynamic languages, especially Python, on parochial grounds not rooted in reality.

In other words, I see a lot of people in the Python community in real life saying, “Haskell is cool, you can do expressive things in it, but it makes certain other things hard and so for a wide range of tradeoffs I wouldn’t pick it.” But I see people in Scala, Haskell, Clojure, F# etc., communities saying, “Python is crap, so unsafe, so many bugs, it’s just a categorically wrong way to design and write programs.”

So the discussion is (in my experience) extremely asymmetrical along these programming religion lines.


> I don’t believe a lot of commenters who seek out this discussion would give a similarly charitable view of dynamic typing, and in my real life experience, these are people who superficially dismiss projects written in dynamic languages, especially Python, on parochial grounds not rooted in reality.

Fine, but that's a criticism of the people not the language. I'm interested in the latter and not really in the former, unless you're trying to say that they are somehow linked.


> “Fine, but that's a criticism of the people not the language.”

I totally agree, and my dissenting opinion of Haskell is not based on anything about people or communities, just on ergonomics of using it and working on a big legacy codebase of it in production.

I mentioned the asymmetry of people who can be zealots about only one paradigm being The One True Way only in response to the parent comment I was responding to.


> It similarly turns me off into trying to discuss this constructively, even though my experience of Haskell is almost a complete opposite of what you're saying here. I guess that leaves space for only low-effort discussion and people happy to hear that Haskell turns out to not be worth it after all.

Let's hope there's another alternative: that those of us with seemingly opposite experience and opinion to mlthoughts2018 can encourage him/her to share more so that we can all learn something beneficial to our lives.


Yes. Especially because I am sure Haskell and similar languages have failure modes in which the seemingly magical sauce I've personally experienced might not work, for one reason or another. I do believe that mlthoughts2018 worked in such an environment/codebase, and it would be extremely useful to figure out what variables are involved in that.


Thanks for your comments. There do seem to be three languages sure to engender disparate opinions: Lisp, Prolog, and Haskell. I suppose that their advocates can be forgiven for their enthusiasm. They are all quite remarkable languages.

I am unqualified to assess the benefits of an advanced type system; after all, I've only worked through examples in a few Haskell books. I've never used Haskell professionally. Haskell is a lovely language. Its compiler is a remarkable achievement of computer science, mathematics, and engineering. Simon Peyton Jones deserves the notable accolades and awards that he has received.

In my opinion, Haskell's most important contribution is in pushing the state of the art of programming languages forward. Is it practical? Yes, that too, but after all these years, it hasn't really become popular because being "practical" wasn't the main goal for Haskell. Haskell was designed to explore the non-strict functional landscape. Haskell's designers made good choices and were able to expand our understanding of non-strict functional programming (e.g. see Miranda [1]).

In the past I did years of research in program verification, so I'm naturally skeptical of the widely repeated claim that "Once your code compiles it usually works" (it's even on the haskell.org site). In what universe is this true? Verification that a program meets its specifications is quite difficult. In general, no compiler for Haskell can even verify that a program will terminate (the Halting Problem). I don't believe that real programmers are using Haskell's type system to formally verify total correctness (which includes freedom from deadlock, etc.) or even partial correctness (the weaker condition that if a program produces an answer that it is the correct answer).

I frequently write Python programs that work the first time; of course they are little scripts. It isn't dynamic typing that is keeping my programs from working more often. Consider the errors that I do make: syntax errors are caught by my IDE, and I don't count those kinds of errors as real bugs. Next there are "type" errors; these aren't really troublesome even when I'm not using Haskell. I can find these almost immediately by testing or even using the REPL.

(Every so often, I've heard of, say, a Ruby program crashing once deployed because there is an untested path through the code that has a type mismatch between an argument and a function parameter. Haskell would catch this type of defect at compile time. That's good.)

However, the really troublesome bugs are more subtle. Do the distributed parts have some kind of race condition? Am I handling the various spans of data within some vector correctly? Can an index touch memory outside of my memory segment? Is the floating point arithmetic doing what I think it should be doing? Have I translated the mathematics of wavelet compression correctly? Do I understand the Vandermonde matrix used in fast decoding of Reed-Solomon error correction codes--I don't! Haskell might be able to help with some of these harder bugs if the concepts can be properly represented in the type systems, but I believe that what mlthoughts2018 is saying is that this is often too hard to be worth it.

Haskell is a pioneering approach to programming, and the next frontier could be dependent types (see [2]). My own feeling is that someday programming will involve a dialog with a proof checker while coding. Writing proofs is hard, it seems harder to me than writing the program, so having an AI assistant that aids with the proof checking might make it more useful than simply struggling to encode a proof in the program's (dependent) types (see the Curry–Howard correspondence[3]).

[1] http://miranda.org.uk

[2] https://serokell.io/blog/why-dependent-haskell

[3] https://en.wikipedia.org/wiki/Curry–Howard_correspondence


My two cents here.

In my experience, dynamic typing has not caused unforseen bugs at run time.

What it does do, is it causes large codebases to become extremely difficult to reason about, as you could get very little information about what types are needed or received where, and program flow, from the code.

Where I work, the managers decided that everything shall be python or ruby. So we have some 10,000+ line codebases, which are very hard to reason about. Including industrial control programs. "garbage collection pauses? What are those?"


This just happens in every programming language. I can tell you because the large Haskell systems I worked on were also incredibly hard to reason about. The analog of garbage collector pauses was accidental misuse of eager evaluation, buried in misdirection through a sequence of specialized implementations that get called due to type class resolution.

Big codebases becoming ugly messes is sociological and pressured by bureaucracy. It is not something that stricter language designs can seriously mitigate, even a little. Meanwhile, very disciplined and experienced teams can avoid it in virtually any programming language.

Some of the cleanest and safest huge software systems I’ve ever worked with were written in C, C++ or Python. Also some of the worst huge systems I’ve seen were written in C, C++ or Python.


I find getting as much tooling as possible that will tell you about types is pretty important with a large code base. For example in the Python code getting type annotations along with mypy up and running tends to be a big win.


Mypy certainly helps, but it is not that powerful, and completely falls flat if you use an old library


That's a shame. There are numerous voices clamouring "Haskell's too much effort for its benefits to be worth it". You are basically the only voice saying "Its benefits aren't even benefits". It feels like you could really add something beneficial. If only there were some middle ground between carefully considered and reasoned critique and vague and unsubstantiated sniping on message boards, but so be it.


fwiw, I didn't read "its benefits aren't even benefits". I read something more like "static typing helps with some problems, but those problems are just a sliver of the real problems." I also heard something like "static typing isn't worth the ceremony to me". Also, a general frustration with the ability of people of dissenting opinions to communicate meaningfully with each other.


I've not worked with Haskell, but while reading about it I've always been skeptical that its type system is actually effective at preventing integration bugs. It's nice to hear this echoed.

I'd love to read more about this experience (the good and the bad).


Well, I'm just pointing out that "easy" is not something that can be attributed to a programming language based on cute, pseudo-code like samples online.

You don't see "Python is easy" examples with a full test suite attached to them, explaining that, well, you are in for a ride without those.

As for your comment: I can believe that production breaking typos may have been at a similar level. I don't believe that the effort to reach that level was the same, though.


In terms of mental concepts I will maybe give you Rust - lifetimes and the borrow checker take some getting used to. But C++ doesn't really have any complicated concepts. Sure it has a lot of features, and some of them have complicated edges (ADL, template metaprogramming, etc.), but most application code rarely uses those things.


I think the complexity of C++ comes from the edge cases, unexpected interactions between features, conflicting syntax, and bad assumptions made over its long history.

I mean, the rule of three is just insane. How is anyone supposed to anticipate that behavior?


I’m not sure lifetimes are that much more complicated than the way C++ does implicit type coercion of user classes, resolution of compile time polymorphic functions, namespaces or SFINAE. I do agree it’s more like death by a thousand paper cuts rather than the torso-cleaving katana of the borrow checker. However I tend to believe any C++ codebase of consequence will run into some of this stuff, unless practices that avoid all the pitfalls are meticulously followed. Which implies a rather thorough understanding of said complexities.


Those are all advanced C++ features. Lifetimes smack you in the face 5 minutes into "Hello world".


Maybe not complicated in terms of how abstract it is, but pretty much everything in C++ is very complicated in terms of all the little rules and exceptions and subtle interactions between behaviors and compilers leveraging UB to do insane things. You can do a lot without thinking about it but if you want to have a precise understanding of things prepare to have to read a ton of rules. It's a language lawyer's dream language.


You may have a point with Rust, but you don’t have a point with C++, and certainly not with other mainstream languages like Python, Scala, Java, C#, and others.

In C++, you can focus on subsets of the language that eschew whole huge paradigms (like templating or inheritance) and you’re roughly no worse off in terms of the scope of practical programs you can write with basic software patterns.

You can’t do this in Haskell. You really do have to learn all the complex hierarchy of paradigm-committing patterns and use nearly all of them nearly all the time, making it much harder to learn than C++ even though C++ is complicated.

Languages can be complicated in different ways, and Haskell is unique in that you have to engage with every complicated aspect of it nearly all the time.


> You can’t do this in Haskell. You really do have to learn all the complex hierarchy of paradigm-committing patterns

Err, no you don't. The only slightly unusual concepts Haskell 98 has (from a functional programming point of view) are monads, type classes and laziness. Haskell 98 is a perfectly decent language to write computer programs in. In fact it's even perfectly decent if you largely avoid monads and type classes!

The "you can focus on subsets of the language" claim is no weaker for Haskell than it is for C++.


> “The only slightly unusual concepts Haskell 98 has (from a functional programming point of view) are monads, type classes and laziness.“

Exactly. The way they manifest in Haskell requires huge time investment before you can write basic programs. For example, how do you write dynamic dispatch in Haskell 98?

If you give an answer that either (a) involves exotic use of type classes or (b) say “don’t desire dynamic dispatch in a functional paradigm and instead restructure the whole program to avoid needing it” then you’ve proven my point.

The fact that you think of monads, type classes and laziness as just "three" concepts (when really they unpack into way more top-level concepts, especially the first two) means you have a huge blind spot. Your experience makes you think of them as self-contained things, but they aren't, and even just those three things yield huge complexity sprawl in Haskell.


I'm sorry, I'm not sure what this has to do with the "you can focus on subsets of the language" claim being no weaker for Haskell than it is for C++. Perhaps you can clarify?

Your claim now seems to be "Haskell contains a great deal of complexity" or "You can't implement dynamic dispatch in Haskell" which are different things entirely.

(FWIW I'm not quite sure what dynamic dispatch is or whether I've ever needed it, in Haskell, C++ or Python)


Suppose you wanted to write professional software in Haskell using only the IO monad and modules of functions. How would you do it? I don’t believe it can actually be done in a serious way.

Suppose you’re using C++ but you don’t want any objects, exceptions or templating. Ok, you’ll be fine. It will be a lot like C, but you’ll be fine. Nothing will be substantially harder to solve, design or implement.


> Suppose you wanted to write professional software in Haskell using only the IO monad and modules of functions. How would you do it? I don’t believe it can actually be done in a serious way.

People write professional software in OCaml and Scheme so I hardly believe your suggestion is impossible.

> Suppose you’re using C++ but you don’t want any objects, exceptions or templating. Ok, you’ll be fine. It will be a lot like C, but you’ll be fine.

Sure, and the same applies to Haskell, except with Standard ML instead of C.

I'm not suggesting that one would be particularly productive like that, only that the level of complexity of Haskell is about the same order as the level of complexity of C++, and one can reduce one's subset of Haskell (all the way down to SML if necessary) in the same way that one can reduce one's subset of C++ (all the way down to C if necessary).

> Nothing will be substantially harder to solve, design or implement.

That surely can't be the case. If it were then objects, exceptions and templating would never have been implemented.


I must be missing the point here because if you wanted to “write ... Haskell using only the IO monad and modules of functions” you could simply write your modules (Haskell supports modules) and then write your IO. Done.
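For what it's worth, here's a minimal single-file sketch of that style: pure functions that would normally live in their own module, with all effects confined to IO at the edge. (Invoice and its fields are made-up names for illustration.)

```haskell
-- A sketch of "modules of pure functions plus IO at the edge".
-- In a real project the pure part would live in its own module
-- (e.g. Invoice.hs); everything here is illustrative.
data Invoice = Invoice { customer :: String, amount :: Double }

-- Pure business logic: no IO anywhere.
total :: [Invoice] -> Double
total = sum . map amount

applyDiscount :: Double -> Invoice -> Invoice
applyDiscount pct inv = inv { amount = amount inv * (1 - pct) }

-- All effects stay in IO, at the outermost layer.
main :: IO ()
main = do
  let invoices = [Invoice "acme" 100, Invoice "globex" 250]
  print (total (map (applyDiscount 0.1) invoices))
```

No monad transformers, no type-class tricks; just functions and one IO action at the top.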

Is that what you mean? Or is this really about something else?


The article is called "you are already smart enough to WRITE Haskell" not "LEARN Haskell".

I think the point is that Haskell is quite powerful and useful without learning the whole of it. Basically, nobody knows the whole of it.

This is a perspective worth considering. The bones of Haskell are great, and its academic origins (god bless them) have it running around in knight's armor and a tutu (or something).

You could most likely use it effectively on a properly advised, disciplined team ("we don't use language extensions").

It might also get dressed up in other clothing and be what we're all using some day. The power and expressivity seem immense.

Imagine, for example, if the Rust documentation team got a hold of it. Holy crap!


I've only ever read half of LYAH and a few chapters of a couple other books. I don't know how Lens works under the hood. I've been writing Haskell for pay for several years with no problem. The rest of the stuff I learned on the fly by reading Haddocks, misc articles, and plugging away with ghci & sketching ideas by hand (e.g. writing my own state monad for learning).


There is plenty to agree with here! But learning Haskell isn't the biggest issue for me. I work in a large org, and getting people to want to learn it with me, or to care at all, is the biggest roadblock. The first thing they look for is a nice IDE and tooling experience, among many other things.


I used to want that too. Spacemacs has pretty good support out of the box if you’re willing to use stack. Vim w/ the haskell-vim-now collection is good too.

But soon after you realize that all you need is any text editor and a terminal (to run ghci(d)).


May I ask you about your drafting/recruitment process regarding Haskell?


You can say this about most languages. C++, Forth, Prolog, Rust, lisp, etc.

Most languages are easy to learn, and that's what the article is talking about. Mastery is a whole other topic.


Yes yes yes on the serious code base. Or even a non-serious code base: the problem is that everyone else's code uses these fancy features. So, for example, you read the docs to some open source library, and all the examples use these 20 language extensions. So congratulations, you can't tell what the code does without learning them.


> you're still going to be unable to contribute to a serious code base.

How is this different from Ruby with Rails, or Erlang and OTP? Or Python and TensorFlow?

> Now, you have to understand at least 20 language extensions which you find randomly at the top of files {-# LANGUAGE ExtensionHere #-}.

I absolutely agree it can be super irritating when you find a new extension that radically changes syntax. I have made jokes and complaints you can find in my comment history on this very site about it.

But I don't think this is very different from C# or C++. Folks tend to exclude and create style guides for their language. At least Haskell has the decency to label these features explicitly.

Most of the really exciting libraries of 2018-2019 (fused-effects and polysemy come to top of mind, but there are many others) don't use terribly exotic extensions. My favorite web framework Spock doesn't use anything too exotic either (and in fact uses nearly identical extensions to the more popular Servant).

So I think that the community is moving forward with a consensus on what the valuable and expected extensions are. What we could do to improve this is make the consensus more accessible to newcomers, both by talking about it (as in Alexis's great extensions post last year [0] or Chris Martin's suggestions and awareness-raising efforts (e.g., [1])) and by standardizing it. Hopefully the community can agree to bundle a raft of uncontroversial extensions together and say, "This is GHC 2020, we just agree this is the default language spec unless you tell ghc otherwise."

> You think I'm done? Let's get to Lens. You can use Lens after a relatively short time of reading the docs. But to understand Lens? Very few people actually understand Lens.

Is this any different from ANY data structure library and the majority of its consumers, though? I still run into senior software engineers with amazing histories who still don't understand things like "What is the asymptotic runtime of a modern sorting algorithm?", or "What alternatives to cardinality estimation could we use here other than the default library bloom filter?", or, more frustratingly to me, "Why is this HAMT not in fact constant-time access, even in practice, for your specific use case? Yes, they're rad, all praise Bagwell, but it's the wrong structure for this case."

We could write a whole thing about sane uses of lens and ways to resist its excesses. Maybe now that I have finally quit twitter, I will do that this year.

[0]: https://lexi-lambda.github.io/blog/2018/02/10/an-opinionated...

[1]: https://twitter.com/chris__martin/status/1102457521380442112


You don’t need to understand the internals of a thing to use the thing.


This has not been my experience of using technology effectively. Without an understanding of the implementation details, you inevitably use something inefficiently or for not quite the right purpose.

I cannot imagine anyone using a database effectively on any significant amount of data without understanding indexes, how different joins work, why join order is important, what effect join orders have on performance, etc. Get to a certain scale and it's not enough to know about indexes; you need to understand the structure of b-trees, disk I/O performance, how CPU cache performance affects b-tree navigation even when index is cached in memory, how to use compound indexes effectively to reduce random access through the index, etc.

The constraints of CPU and memory never go away, and if you're trying to scale something, you're going to be limited on either or both of those resources. That in turn forces you to understand execution and memory behaviour of the abstractions you're working with. All abstractions leak when pushed.


> I cannot imagine anyone using a database effectively on any significant amount of data without understanding indexes, how different joins work, why join order is important, what effect join orders have on performance, etc.

But conversely you probably didn't have to understand what filesystem the database runs on, whether it is in a RAID array, whether the network connection was over Ethernet or T1, etc.

All abstractions leak. The question is how leaky they are. In my experience Haskell abstractions are much less leaky than most.


I did and do; when performance at the database level isn't right, you need to look into OS stats, I/O stats, disk stats (the SAN is an area ripe for unexpected contention, RAID5/6 act like a single spindle and inhibit random access in ways that RAID1 or RAID10 don't, stripe length in RAID5/6 bloats the size of small writes, etc.), but I had to stop somewhere :)


> but I had to stop somewhere

Why? You seem unsatisfied with any of the previous layers of abstraction, so where does it end? When can I truly call myself a user of a database?


When you don't have to fix it when it stops servicing requests, or worry about scaling workloads.

When you treat it as a black-to-grey box, not a grey-to-white box.


So I'm confused - how many years did you have to study before you felt confident enough in your grasp of all the underlying concepts to write your first "Hello World"?


Hello World doesn't push much to the limits.


I’m not much of a haskeller, but friends’ war stories indicate that laziness leaks quite a bit.


Thunk leaks happen, but they aren't that scary. They all pretty much have one of a handful of root causes.

It's definitely a wart of laziness, but it's also pretty easy to avoid.

As far as abstraction goes, laziness doesn't leak. In fact, it's a big reason Haskell performance composes. The head . sort example is contrived, but it holds true in more complicated & useful examples as well!
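To spell out that example: because Data.List.sort is a lazy mergesort, demanding only the first element of the sorted list only forces the merging work needed to produce it, so the composed pipeline does less work than a full sort.

```haskell
import Data.List (sort)

-- Laziness lets composed pipelines do only the work that is demanded.
-- `head . sort` finds the minimum without ever materializing the whole
-- sorted list: the lazy mergesort only evaluates enough merge steps to
-- produce the first element.
smallest :: Ord a => [a] -> a
smallest = head . sort

main :: IO ()
main = print (smallest [5, 3, 8, 1, 4])  -- 1
```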


Aha, yes! You are right. But then so does strictness. See https://augustss.blogspot.com/2011/05/more-points-for-lazy-e... for some examples.

Laziness is definitely a double-edged sword, with sharp edges.


I am in the Database-as-a-Service world.

I have to understand all the intricacies of the system, so people who choose to treat databases as black boxes don't have to. There is only so far this abstraction holds.

Stateless (immutable) code scales horizontally. ACID[1] doesn't.

[1] https://en.wikipedia.org/wiki/ACID


Fun aside, coworker recently had a fun issue where the database server would halt due to running out of disk space... except the disk had over 100 GB left.

Turns out NTFS has a maximum number of file fragments a file can have, and if it exceeds that it will refuse to grow the file (for obvious reasons).

Hardly everyday stuff though.


> I cannot imagine anyone using a database effectively on any significant amount of data without understanding…

You'd be surprised just what proportion of systems running in the market operate on amounts of data you would not deem "significant".

And as I said in another comment, Haskell doesn't try to pretend that computations run with no hardware constraints.

> All abstractions leak when pushed

Yeah. But we might have wildly different ideas for where that boundary is.


I wouldn't be surprised because I seek employment in areas where my skills are valuable.


This is interesting. I work as a data engineer but never had to really understand how joins or indexes work, because I work 99% of the time with denormalized data. I also do not know much about b-trees. I think you come from the database developer point of view rather than the database user point of view.

There is also another aspect of this: even if I do not understand joins or b-trees, I can measure performance, so I can figure out which combination of joins is the fastest. The reason I prefer performance testing over getting to know the theoretical background is that in many cases your performance also depends on the implementation details, not only on which tree data structure backs your implementation.


It's exactly because the performance depends on the implementation details that when you understand implementation details, it can guide what you test and help find an optimization peak faster - or come up with a theory of another peak more distant, and journey through the performance valley to find it.

Implementation details follow the contours of the fundamental algorithms and data structures. You understand the outline of the implementation details - the bones, if you will - from that. And you can dive into the source (or disassembler - I've done that on Windows) to fine tune knowledge of other specifics, with e.g. gdb stack traces on the database engine when it is abnormally slow to focus attention.

Without having gone to battle together over some performance problems, or me writing significantly longer battle-story posts, we're probably not going to be able to communicate effectively here.


<rant> I work with a team that has a 16 TB database, who doesn't understand how it works. They seem really surprised when their queries run slow. </rant>


Do you struggle to use a computer due to not having an expertise in semiconductor physics?


This is exactly right. I use lenses all the time, but I have absolutely no idea how they're actually implemented, nor do I need to know.

This is abstraction. If there's one thing Haskell does well it's abstraction.

EDIT: It's really bizarre. We see these same responses to all the Haskell-or-Idris-or-whatever threads -- I wonder if there's some imposter syndrome going on where "I can't immediately read/write Haskell" somehow morphs into "Haskell is useless". IME it's really rare for people who actually program in Haskell to have serious issues with the language. Yes, there are issues from a smaller ecosystem, package management was bad (Stack fixed that), etc. etc., but there are very few fundamental problems with the language. Something as small as pattern matching is a huge increase in productivity. Thankfully, quite a few languages have adopted pattern matching these days (Scala, Rust, TS, maybe even C++23?).

(The really big payoff comes from granular effects, but I'm sure the rest of the world will realize in about 20-30 years' time. The Erlang people already have, albeit in a different way.)


People look at weird syntax and discussions about things that have no resemblance to the problem they are facing, and conclude it must not be useful for anything real. Yes, those problems have no resemblance to any real problem because of abstraction, but most people's experience with abstraction is in a Java-like language where nothing good ever gets out of it.


You don’t need to understand the internals of a thing, until you do. Everything works fine as described in the documentation until it doesn't for your use case. You might be lucky and find help on stackoverflow; otherwise you need someone who really groks it.


Haskell's internals are actually pretty easy to inspect!

- Haddock hyperlinked source makes it easy to understand libraries if you need to dig into internals

- ghci makes it easy to interactively learn the language - and a new codebase!

- Haskell can dump the stages of its codegen pipeline (Core, STG, Cmm)

- Profiling and Event logging exist and are easy to use

- You can even write an inspection test to assert certain behavior holds. For instance, that no allocations occur or that some abstraction (e.g. Generic) is truly erased by GHC's optimizations


Sure and at that point you need to learn those internals. That's just the way it is with everything, no?


When things go wrong, you might not have time to do so. When my project started to leak memory at an enormous rate, I was able to find the issue quickly enough. But if I hadn't known how all those things work, I would have spent weeks or months learning them. Restarting the application every 10 minutes for a week is not a good idea.


How is this different for Haskell than with anything else? If you want to be an expert in something, anything you do need to put in the work and learn it inside-out. There is no royal road. I fail to see how this is specific to Haskell...


It boils down to the steepness of the learning curves.

If I am in the business of system reliability, I will choose the language with shallower rabbit holes.

Abstraction layers are great for builders and terrible for fixers. I am both, so I need to strike a balance.


I agree with you, I don't think that it's different for Haskell.


There was this piece of common knowledge floating around a number of years ago about how you need to know at least 1 level of abstraction beneath you well, and have a working knowledge of the second one below it, to use your tools effectively. I don't recall where the advice floated around or came from but it was something along those lines, and it's pretty true.


Were you thinking of Joel Spolsky's Law of Leaky Abstractions[0]?

[0] https://www.joelonsoftware.com/2002/11/11/the-law-of-leaky-a...


I don't think so, though it's close.


I’m not sure I agree with that.

How does this work in the context of CSS? Do people making websites need to understand how WebKit paints the screen?

The word “effectively” seems rather arbitrary here too.


I actually think you do need to understand rendering logic to some extent to use CSS effectively.

For example I have seen many having a hard time understanding why it is trivially easy to align an element to the top of the screen but tricky to align something to the bottom of the screen - something which would be symmetric and equally simple in a typical application GUI framework.

But understanding how layouts are generated makes this clear.


IMO the conceptual level below CSS is in the design sphere - the box model itself. You would not believe the number of people who hack together properties until they get something that looks right, without understanding the box model, who think they're really good frontend developers.


I feel like being able to find an exception doesn't mean the rule is invalid?


How many exceptions should I find to invalidate the rule?


Enough to show it's at least on the same order of magnitude as the number of situations where the rule does hold.


Only if you have infinite memory. In theory there's no difference between folding left and folding right (if the operation is associative); in practice there is a right way and a wrong way.
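The classic instance of "a right way and a wrong way" is summing a large list: a strict left fold runs in constant space, while a lazy one builds a million-deep thunk chain before doing any addition.

```haskell
import Data.List (foldl')

-- The right way: foldl' forces the accumulator at each step,
-- so summing runs in constant space.
sumStrict :: [Int] -> Int
sumStrict = foldl' (+) 0

-- The wrong way: plain foldl builds the unevaluated thunk
-- ((0+1)+2)+... and can exhaust memory on large inputs.
-- (foldr (+) 0 has an analogous problem for this use.)
sumLazy :: [Int] -> Int
sumLazy = foldl (+) 0

main :: IO ()
main = print (sumStrict [1 .. 1000000])  -- 500000500000
```

Same types, same result, very different space behavior.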


> In theory there's no difference between folding left and folding right

There's no difference in practice either for a sufficiently small dataset.

> in practice there is a right way and a wrong way

Sure, but that's true of all technologies.

Yes, Haskell can't help you escape the limitations of our world — or indeed our hardware — but it doesn't pretend to either.


> There's no difference in practice either for a sufficiently small dataset.

As a non-Haskell user, just for reference what's "sufficiently small"?


It depends on your needs. This is neither constrained to Haskell specifically nor functional programming more generally.

If you were building a website for your local Italian restaurant, what would your needs be? Do you need an ElasticSearch cluster to handle customer menu item queries? Do you need a database at all?

In Haskell's case it's best to avoid lists entirely, as they're _usually_ not the optimal data structure. But best for whom? Does the beginner care that a set operation would be more effective than a list operation for their given use-case?


Not sure how that's an answer to my question?

In my day job, I frequently generate records returned from a database along with local changes to be posted later, and say compute the sum of one of the columns. That sounded like something I'd use folding for, with my limited knowledge, so I was just curious at which point (order of magnitude) I'd have to worry about doing it this way or that.

But if lists are not to be used, what should I use for the above? And will the data structure you propose be fine with either fold?


Again, it depends. A left fold might be fine (and likely will be). Lists might be fine (and likely are). I don't know anything about the size of your data, or about the business you're writing software to support (who knows, maybe you're writing a FPS game, or some real-time aviation software!).

At this point (taking into account the extent of your knowledge as you yourself described it), my advice would be to just use basic folding and lists. At some point in the future, if you observe some performance issue in exactly that area, you might remember this and think "hmm, memory consumption is bit excessive here; maybe I'll try a foldr instead", or "finding the intersection of these two big lists is a little slow; maybe I'll convert them both to sets instead?"


>There's no difference in practice either for a sufficiently small dataset.

So what happens in practice with unfortunately large datasets?

>Yes, Haskell can't help you escape the limitations of our world — or indeed our hardware — but it doesn't pretend to either

Then what is Haskell's value-proposition when it comes to solving real-world problems?


> So what happens in practice with unfortunately large datasets?

You take a different approach. Haskell provides plenty of options.

> Then what is Haskell's value-proposition when it comes to solving real-world problems?

There are many. You can consult your favourite search engine to learn more.


>You take a different approach. Haskell provides plenty of options.

So do other languages. Why is Haskell special in this regard?

>There are many. You can consult your favourite search engine to learn more.

None that seem to address the problem of diminishing returns.


> So do other languages. Why is Haskell special in this regard?

I never suggested other programming languages don’t also have value. Again, if you want to educate yourself further on a specific technology’s benefits, I invite you to make use of a search engine instead of sea-lioning on a technical forum.

> None that seem to address the problem of diminishing returns.

That’s your opinion. Nobody is forcing you to like Haskell. You are free to just ignore it.


Another item is the crazy number of compiler pragmas you have to use to literally modify the meaning of syntax just to get to a minimally viable state to work on a real project.


> But to understand Lens? Very few people actually understand Lens.

That's a lie. The basics of optics can be taught to even new Haskell programmers in an hour or so. Don't start in the deep end with generic optics with scary signatures like (Profunctor p, Functor f) => p a (f b) -> p s (f t). Start with something concrete like (String -> IO String) -> (User -> IO User) and then introduce the type variables one at a time. I've taught the basics of lenses, traversals, folds and other useful optics many times.
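For the curious, here is a compressed sketch of where that teaching path ends up: the standard van Laarhoven encoding built from scratch, with a toy User type and hand-rolled Const/Identity functors standing in for the ones the lens library actually uses.

```haskell
{-# LANGUAGE RankNTypes #-}

-- A lens lifts an action on a field to an action on the whole structure.
data User = User { name :: String, age :: Int } deriving (Show, Eq)

type Lens s a = forall f. Functor f => (a -> f a) -> s -> f s

-- A lens focused on the `name` field of a User.
nameL :: Lens User String
nameL f u = fmap (\n -> u { name = n }) (f (name u))

-- view and over fall out just by choosing the functor:
newtype Const a b = Const { getConst :: a }
instance Functor (Const a) where fmap _ (Const x) = Const x

newtype Identity a = Identity { runIdentity :: a }
instance Functor Identity where fmap g (Identity x) = Identity (g x)

view :: Lens s a -> s -> a
view l = getConst . l Const

over :: Lens s a -> (a -> a) -> s -> s
over l g = runIdentity . l (Identity . g)

main :: IO ()
main = do
  let u = User "ada" 36
  putStrLn (view nameL u)        -- ada
  print (over nameL (++ "!") u)  -- User {name = "ada!", age = 36}
```

That's the whole trick: one type, and every combinator is a choice of functor.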


Coincidentally, it’s also the only programming language I know of where someone has written a lengthy blog post about how I’m, in fact, not too dumb to comprehend it.


I didn't think about how hard Haskell is because I never forced myself to learn it. It just didn't interest me. With Java or C# I can make so many things with minimal friction. The other language that people talk about being hard is Rust. I am going to assume there are blog posts about it not being hard.

I like that with Go or Erlang, everything I learned 5 or more years ago has stuck with me. With D I can be effective quickly. With Rust I struggle a bit. Rust is probably great for building a web browser, but doing backend web development feels like way more work than Go or even Python (CherryPy). As for Haskell, I don't even remember a darn thing anymore.


Haskell was used in the first programming class at my university, and that was excellent. Some people that already knew how to program had to throw their preconceptions out the window, so everyone was either a novice at programming entirely or at least a novice at functional programming. It wasn't a problem at all. I'd argue most students wrote better code in that class than they did in the imperative/OO classes that followed. Especially the students that weren't (and likely never did become) programmers.

The thing about imperative and OO programming is that it’s hard to do well. I honestly haven’t seen more than 1 in 10 developers write ”good” OO code even after 10 or 15 years as professionals. Large-scale OO is a cognitive load that requires extreme focus, skill and talent. I prefer functional (or OO using an extreme functional discipline, like all immutable types etc.) because I don’t have that talent, focus and skill.

My point isn’t that Haskell should be the language of choice. I think it’s a great language, but I think e.g. laziness makes it too hard to reason about performance and behavior. Today I’d recommend F#, I think.


And how many people dropped out of CS because of that first Haskell course? I saw plenty of people struggling with Python initially and I know exactly how they felt later when we had to learn Haskell.

I think if I had to learn Haskell first, then I would've just given up at the start.


Where I did my CS degree (Imperial College London), Haskell is the first language taught (before Java and C).

To my knowledge across 4 years there, nobody ever dropped out because of the Haskell.

People usually struggled with the maths instead.

People also did not have more problems getting Haskell concepts right than linked list modification in Java.


They might not drop out specifically because of Haskell, but I don't see how it wouldn't be a contributing factor. The Haskell course here was pretty much just a struggle for almost everyone that took it.


Mind the opposite: finishing a bachelor's without ever having to deal with a functional language.

I've met a lot of people (at work and post-grad) who have graduated from colleges without an idea of what a functional language is or the concepts behind them. I don't blame them, but it's a pity to find so many workarounds in legacy code which could have benefited from functions as parameters and less state in general.


I think they deliberately did some really “rough” math and CS classes up front because they knew some would drop out and they preferred them to drop in the first few weeks so someone could take their spot.


Yeah, my university did something similar. The very first computer science course was a bog-standard Java course - this was often an elective that engineers or science folks took also. The first CS-specific course used Haskell, and it probably turned off 75% of the people that tried taking it.


My university (Gothenburg University) did this for the CS course. Haskell does a very good job at levelling the playing field since it was new even to those who had prior programming experience. As I recall, about 40 people out of 70 dropped out during the first semester.


I really hate the concept of "weed out" courses. That is the opposite of what colleges should be doing. We had plenty of them in the sciences almost 30 years ago when I started. We need to make everyone welcome in the sciences, not weed people out with overly complex 101 courses.

We had Haskell in college too, almost 30 years ago, and even then it was one of the hardest courses at school. This was before the IO monad, just as the language was being created in the early 90s.


Anyone who cannot do a Haskell class probably shouldn't become a software engineer. Better software that way.


Let's accept your basic premise, that it provides a meaningfully strong positive filter.

We only get better software if the people who don't pass the class proceed to not write software, as opposed to writing software anyway without the training available in the following courses (and presumably writing worse software) or being replaced in industry by others who could not have passed that filter.

If neither of those happens then we presumably will have better software. We will also have less software. It's not clear whether that trade-off is the right one.


Seems a bit harsh.


Of course it's harsh! But perhaps also true?


Or perhaps not so true.

I believe (based on exactly zero hard data) that FP fits the way some peoples' minds work, and procedural fits how other (many more) peoples' minds work. The ability to pass a Haskell class is not the same as the ability to program competently in at least one language.


Life is full of weed out exercises.

I managed to get through some of them, was weed out from others. That's life.


To be clear: I don't think anyone actually dropped out because of a tiny Haskell class. A lot of people did drop out because they hated math and failed the first couple of courses.


Lisp (Scheme) was chosen for the same reasons at my uni, but I think is a superior choice as the syntax is so small and it’s very appropriate for teaching.


Programming in any language or paradigm is hard to do well. Since the vast majority of the software we use is written in the imperative and OO paradigms, it is probably safe to say that it doesn't require any special developer power, as you claim.


> I don’t have that talent, focus and skill.

Same. I'm perfectly happy to fall into the pit of success[1].

1. https://www.youtube.com/watch?v=US8QG9I1XW0


> Haskell was used in the first programming class at university and that was excellent.

I don't think getting started in haskell is that hard. If you're just doing theoretical exercises like building data structures and their related operations then it might even be easier than other languages.

Building a bigger application that does things like create a mini game or act as the backend for a web service, you know the cool stuff, becomes a lot harder.
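The "theoretical exercises" point is easy to illustrate: the classic intro data-structure assignment really is only a few lines in Haskell.

```haskell
-- The classic intro exercise: an immutable binary search tree.
data Tree a = Leaf | Node (Tree a) a (Tree a)
  deriving Show

-- Insertion returns a new tree; the old one is untouched.
insert :: Ord a => a -> Tree a -> Tree a
insert x Leaf = Node Leaf x Leaf
insert x t@(Node l v r)
  | x < v     = Node (insert x l) v r
  | x > v     = Node l v (insert x r)
  | otherwise = t

-- In-order traversal yields the elements sorted.
toList :: Tree a -> [a]
toList Leaf           = []
toList (Node l v r)   = toList l ++ [v] ++ toList r
```

The gap the parent describes is real, though: none of this prepares you for wiring up HTTP handlers, database pools and logging in a monad transformer stack.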


Yes. I think we wrote a guessing game and a miniature parser etc. The concept of "web service" wasn't invented, sadly.


That just reads as:

"You are already smart enough to write Haskell"... if you accept that you are too dumb for most of Haskell and are therefore okay with being cut off from most of its ecosystem.

Digging into a dependency due to a bug or custom feature is something that usually happens on any bigger project of mine. If I have to expect that I won't be able to work in a dependency's codebase because it will most likely contain (multiple) concepts that I won't be able to understand, then that's a big no-no.


I have only done Haskell in college, and my friend is doing Haskell web backends in production, so I'd say I have a pretty good feel for the skill gap in Haskell. There are many concepts that he tries to explain to me which I don't get 100%, but it really does feel like all I need is more time. It was like that at first with monads, and then phantom types, row types, HKTs etc.

I'd say the biggest deterrent for people is that it takes more than one blog post and one hobby project to comprehend everything. It may take 1-2 years to learn all the concepts. To me the beautiful thing is that the concepts learned aren't some language quirks but general programming/math concepts which you simply cannot think about in simpler languages like C#.

With all that, I'm still heavily put off from Haskell by all the tooling, and I just can't let go of the comfort of working in Visual Studio. In 10 years I hope either C# gets some of my favorite things from Haskell, like HKTs and better inference, or Haskell gets better tooling and a broader ecosystem. My bet is on the former, but Gafter keeps postponing HKTs spec after spec.
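Of the concepts listed, phantom types are smaller than they sound. A sketch, with hypothetical names throughout:

```haskell
-- The type parameter never appears on the right-hand side: it exists
-- only at compile time, hence "phantom".
data Raw
data Validated

newtype Input a = Input String
  deriving Show

-- The only way to obtain an Input Validated is to go through validate.
validate :: Input Raw -> Maybe (Input Validated)
validate (Input s)
  | null s    = Nothing
  | otherwise = Just (Input s)

-- Handing this function an Input Raw is a compile-time error, even
-- though both wrap the exact same String at runtime.
runQuery :: Input Validated -> String
runQuery (Input s) = "SELECT ... WHERE name = " ++ s
```

That is, `runQuery (Input "x" :: Input Raw)` simply does not type-check, which is the whole point: invalid states become unrepresentable with zero runtime cost.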


If you feel like messing around with Haskell again for an afternoon then install VSCode with ghcide (and a syntax highlighting plugin ofc). You might get surprised ;)


Are you using that? Doing Haskell on vscode has been a painful experience for me.


Yep, my whole team has switched to VSCode+ghcide and so far we're in love with it. Mind you, ghcide is brand new, so this luxury of having a 21st century Haskell IDE did not exist even just a few months ago. Before, we used ghcid with whichever editor (e.g. Vim/Emacs/VSCode).


Thanks, I'll switch to it now based on your recommendation. Desperately want a decent Haskell editing setup, the bad tooling is killing my productivity and making me consider quitting Haskell.

Edit: Ok, so I tried it. Template Haskell, which my projects use heavily, seems to break it. And it does not even report syntax errors in some files. Seems like yet another dead end for me as an editor setup, unfortunately.


Ah yes TH. That’s imho the last biggie in the way of total awesomeness. See this ticket: https://github.com/digital-asset/ghcide/issues/34


Got it, I'll keep an eye on this. Thanks for this recommendation!


Have you written up a blog post about this? I think it would be really useful!


Agreed! For years I used Emacs with various Haskell tools, but after I tried VSCode and ghcide, that is almost all I use anymore.


Most people drive cars, yet have very little understanding of how they actually work internally. As long as the interfaces are well designed, this works fine for most.

I'd say the same goes for Haskell.


That's fine if you use your car for commuting (= casual projects with no special requirements) but not if you want to compete on the race track.

Of course, ideally you can stay out of the dependencies, and a lot of people don't even think about digging into them and unnecessarily limit themselves, but as I said in my initial comment, every bigger project I've worked on involved tweaking dependencies in one way or another.


To run with your analogy — which I don't find particularly constructive anyway — people who are just starting to drive don't start by competing on the race track.


With no prior experience with lisp, I was able to get comfortable in a clojure-only codebase in less than 3 months.


That’s great. Not sure what point you’re making though :)


I'd say rally driving is the best analogy. Not only do you have to drive the car at peak performance, you also have to perform ad hoc repairs halfway through a stage


No one has ever been able to explain to me why I should use Haskell instead of something else. I get answers about idempotence and list comprehension and strong typing which are great tactics but I never get the sense that they fit into an overarching strategy for how my life will be made easier by using Haskell.

I know Carmack has presented a case to code in a pure functional style here https://www.gamasutra.com/view/news/169296/Indepth_Functiona... but that somewhat precludes the necessity of switching to another language from C++. Further, he advocates for a new keyword 'pure' to assist the compiler like const does, perhaps not knowing that const doesn't actually help the compiler out in practice.


> but I never get the sense that they fit into an overarching strategy for how my life will be made easier by using Haskell.

I would add that the elitist signaling that is rampant in functional programming in general and the Haskell community in particular is also not helpful.


As a counterpoint when wanting to promote any of the "new wave" (even though Haskell isn't new) languages like Haskell, Go, or Rust for certain use cases it's become too common lately for people to dismiss it as evangelism (or indeed, elitism), and that is also unfortunate in my opinion.


I never got a sense of evangelism or elitism from Go advocates


Yeah, they don't need to evangelize it, they just have the critical mass to use it everywhere and the rest of us have to deal with their garbage.


Absolutely. "I'm just not smart enough to use mainstream languages" - "In fact we both know you are very smart. But, as we've just learned, also not competent enough to hide your elitist signaling in subtlety".


Wait, how is saying “I’m just not smart enough to X” elitist signaling?

If I tell someone “I’m not smart enough to use PHP”, I’m admitting a weakness. I genuinely don’t have the mental capacity to keep details straight when using it. If I tried to use PHP at a job, I’d move at such low velocity that I’d get PIP’d.

Or it's like saying “I can’t write code without automated tests.” There are genuinely people who, if they try to write code without tests, get stuck for hours and don’t know how to move forward.

I’d think this is the same sentiment. I’m willing to believe that the Haskell community is elitist in other ways. But how is “I’m not smart enough to X” an expression of that?


"My brain is 100% pure logic so I can't deal with the unholy messes of the unwashed masses. In fact I'm so smart that I didn't even notice that many many things are much harder to write in this language that requires you to think about unrelated math things, because I solve them for breakfast"


How does "I'm not smart enough" turn into "I'm so smart"?


I didn't think it was that subtle, but different people have different sensors I guess.


It doesn't seem subtle. It seems like a dramatic transmutation from one meaning ("I have a need") into another totally different meaning ("other people are less worthy of respect").


I don't know, if you say "I'm just not smart enough" as a mathematician that is more interested in Functors and Monads and Yonedas than solving actual engineering problems, then there is definitely deception and signaling going on. IMO.


So, I agree that it is totally possible, and maybe common, for someone to ignore the business impact of their work. More engineers should read https://www.kalzumeus.com/2011/10/28/dont-call-yourself-a-pr... and understand that ultimately their role in their organisation is to Cut Costs/Risks or Grow Revenue/Impact. But having a commercial or UX mindset doesn't make your brain more effective at using any given tool.

Imagine a programmer who, when she studies Haskell, discovers that the way it resembles category theory provides really effective mental affordances to her memory. She feels that Haskell's formalism makes it easier for her to have a clear mental model. When she writes and reads Haskell, she can predict what that code does to data, and her predictions are largely correct. She feels confident in her understanding. This means that if she has a business goal she needs to fulfil, she's confident she can estimate that task, communicate its complexity to stakeholders, and execute it.

Imagine that this programmer, when she starts to study PHP, does not find similar affordances. She finds it difficult to build a conceptual structure of it in her head. She finds that she forgets things or overlooks things when she tries to write PHP. If she tries to predict how a piece of code she writes in it behaves, she frequently is wrong. She feels very nervous about it. Consequently, when asked to plan adding a feature to a PHP codebase, she doesn't feel she's got a good enough grasp of her tools to answer. She worries about the risk to the business of blowing past estimates and making errors in production.

She tells someone "I don't think I'm smart enough to write PHP".

1) Is this statement a lie? 2) Is this statement elitist? 3) Is she a person who could exist?

What if her positive feelings about Haskell lead her to evangelise it in a way that ignores the fact that another person's brain might work in a different way to hers. She claims to this other person that Haskell is easy.

4) Does that statement make her elitist?

-----

My answers: No, No, Yes, Kinda yea.


"My coding practises are so good, I can't even write it the bad way anymore."


GP complains that nobody can explain to him why he should switch to Haskell, but when somebody tries, that's "elitist signaling"?!

You know what, use whatever makes you happy, while we continue to avoid success at all cost.


It depends on how it's done. If the explanation is "Because strongly-typed FP is the One Right Way, and you're stupid to use anything else", that's elitist signalling. If it's "because these certain kinds of errors simply go away, without other kinds of errors becoming more common", then it's not.


Yeah, the elitist attitude around the language is a shame. Haskell is actually pretty easy to learn and teach, and is very useful in certain domains.

Functional programming is just one of the paradigms, and useful in its own domain. It's not an end-all solution to everything.


> Haskell is actually pretty easy to learn and teach

What are you basing this on?


I must concede that it is my own experience


This is what has killed Scala. It was a fine pragmatic hybrid language which perfectly combined FP and OOP. Now it is overrun by hordes of type astronauts who will look down on you if you are not using Tagless Final, Cats, or whatever new fad is now in vogue.


If you (or anybody else) haven't given up on Scala just yet, have a look at ZIO

https://github.com/zio/zio/

It's a fresh, and pragmatic too, take on how to do pure functional programming, but without advanced concepts (like higher-kinded types).

Could be (and maybe even has been) ported over to other languages besides Scala.


Yes, ZIO is really interesting. Unfortunately, its author has recently been ousted from the community for political reasons (and there is too much drama in the Scala community in general). I only hope that the dust will settle someday.


Just wait to see where it will take Kotlin with its Scala refugees.

https://arrow-kt.io/


The people making this library aren't Scala refugees - they're a Scala shop that uses Kotlin on Android only, so they decided to bring over their idioms.


I stand corrected, but it doesn't change what is coming to Kotlin as well.


> a fine pragmatic hybrid language which perfectly combined FP and OOP

Ah, so Common Lisp then?


Really? Might I ask which places you are visiting where you see this behaviour? I have rarely seen elitist behaviour in r/haskell or freenode #haskell. Mostly I see people complaining about elitism on HN.


I guess that's the main problem.

And look, defining a problem functionally works great in 10% of cases, but it complicates some 40% of the other cases (very imprecise numbers).

The world is not functional, as it turns out. And a lot of those cases where you can write functionally don't gain much from a performance or correctness perspective compared to the procedural version


> The world is not functional

What does this even mean? If the world is not functional, then what is it? Is the world procedural? And what world are we talking about? Our planet, physically? Are you then discounting the worlds of mathematics and logic? You don’t gain performance? What performance? Program execution speed? Development pace and time to market?

Compared to the procedural version? Is your view that procedural programming is inherently better — for any definition of the word — regardless of context? Would SQL queries be easier to write if we told the query planner what to do? Is the entire field of logic programming — and by extension expert systems and AI — just a big waste of time?

So many vague aphorisms which do nothing to further the debate. And Haskellers are the ones getting called “elitist”!


>> The world is not functional

> What does this even mean? If the world is not functional, then what is it?

The world is full of special cases, of corner cases, of exceptions, of holes, of particular conditions, of differences for the first or last element, and other irregularities; it is also full of sequential events. Applying a function to a starting set to produce a resulting set or value is very lean with nice, clean mathematical objects and relations, but not with many real-world problems.

> And Haskellers are the ones getting called “elitist”!

Well, one may say that answering with questions could fit the bill...


> The world is full of special cases, of corner cases, of exceptions, of holes, of particular conditions, of differences for first or last element, and other irregularities; it is also full of sequential events.

True

> Applying a function to a starting set, to produce a resulting set or value is very lean with clean nice mathematical objects and relations

True

> but not with many real-world problems.

Debatable, but in any case a non-sequitur. Are you sure you're talking about functional languages as they're used in reality?

I once wrote a translator between an absurdly messy XML schema called FpML (Financial products Markup Language) and a heterogeneous collection of custom data types with all sorts of "special cases, corner cases, exceptions, holes, etc.". I wrote it in Haskell. It was a perfect fit.

https://en.wikipedia.org/wiki/FpML


> The world is full of special cases, of corner cases, of exceptions, of holes, of particular conditions, of differences for first or last element, and other irregularities; it is also full of sequential events.

Yes. All of which are modelled in Haskell in a pretty straightforward manner. I’d argue Haskell models adversity like this better than most languages.

> but not with many real-world problems.

Haskell is a general purpose language. People use it to solve real world problems every day. I do. My colleagues do. A huge amount of people do.

> Well, one may say that answering with questions could fit the bill...

I see. So trying to define the terms to enable constructive discourse is elitist. Got it.

If you want me to be more concrete and assertive, fine. No problem. Here we go.

You are wrong.


It's not vague, processors are still procedural. Network, disk, terminals, they all have side effects. Memory and disk are limited.

SQL queries are exactly one of those cases where functional expression of a problem outperforms the procedural expression, and that's why they're used where it matters.


Are you suggesting functional programming is somehow so resource intensive that it is impractical to use? Or that functional programming makes effectful work impractical?

Because neither of those are true.

You’ve conceded that SQL queries are one case where a functional approach is more ergonomic (after first asserting that the world is not functional, whatever that means). Why aren’t there other cases? Are you sure there aren’t other cases? One could argue that a functional approach maps more practically and ergonomically for the majority of general programming work than other mainstream approach.


> Are you suggesting functional programming is somehow so resource intensive that it is impractical to use? Or that functional programming makes effectful work impractical? Because neither of those are true.

No, I'm saying the functional abstraction doesn't work or is clunky in a lot of cases and that Haskell's approach of making it the core of the language is misguided (Lisp is less opinionated than Haskell for one).

> One could argue that a functional approach maps more practically and ergonomically for the majority of general programming work than other mainstream approach.

They could argue, but they would be wrong.

SQL works because it's a strict abstraction on a very defined problem.

Functional is great when all you're thinking about is numbers or data structures.

But throw in some form of IO or even random numbers and you have to leave the magical functional world.

And you know why SQL works so fine? Because there are a lot of people working hard in C/C++ doing kernel and low-level work so that the "magic" is not lost on the higher level abstractions. And can you work without a GC?


> I'm saying the functional abstraction doesn't work or is clunky in a lot of cases and that Haskell's approach of making it the core of the language is misguided

Functional programming certainly doesn't work literally everywhere, but to say Haskell's design is "misguided" is your opinion, and it's one that some of the biggest names in the industry reject. How much experience do you have designing programming languages? Or even just building non-trivial systems in Haskell? Judging by the evident ignorance masked with strong opinions I'd say around about the square root of diddly-nothing.

> But throw in some form of IO or even random numbers and you have to leave the magical functional world.

Wrong. Functional programming handles IO and randomness just fine.
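As a sketch of why (using a hand-rolled toy generator for self-containment, not the real System.Random API): randomness is just generator state threaded through pure functions, and the effect boundary is ordinary IO.

```haskell
-- A toy linear congruential generator; illustration only.
newtype Gen = Gen Int

-- Pure: each call returns a die roll and the next generator state.
next :: Gen -> (Int, Gen)
next (Gen s) =
  let s' = (s * 1103515245 + 12345) `mod` 2147483648
  in (s' `mod` 6 + 1, Gen s')

-- Pure core: roll twice by threading the generator explicitly.
rollTwice :: Gen -> (Int, Int)
rollTwice g0 =
  let (a, g1) = next g0
      (b, _)  = next g1
  in (a, b)

-- Effectful shell: only the printing lives in IO.
printRolls :: Gen -> IO ()
printRolls g = print (rollTwice g)
```

Same seed, same rolls, which is a feature: the "impure" parts are repeatable and testable precisely because the state is explicit.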

> And you know why SQL works so fine? Because there are a lot of people working hard in C/C++ doing kernel and low-level work so that the "magic" is not lost on the higher level abstractions

Are you suggesting there aren't a lot of people working hard on GHC? Because if you are — and you seem to be — then again you would be wrong.


> processors are still procedural

Exactly. I don't know why Haskell fanboys insist on abstracting us so far from the machine. Declarative programming is really unhelpful because it says nothing about the order in which code executes.

We live in a mutable physical universe that corresponds to a procedural program. One thing happens after another according to the laws of physics (which are the procedural program for our universe). Electrons move one after another around circuits. Instructions in the CPU happen one after another according to the procedural machine code.

The universe would never consider running 2020 before 2019 and a CPU will never execute later instructions before earlier instructions.

Haskell fanboys talk about immutable data structures like it's beneficial to go back in time and try again. But it's a bad fit for the CPU. The CPU would never run some code and then decide it wants to go back and have another go.


You're saying a lot of wrong things about CPUs. CPUs do execute instructions "out-of-order", and they do speculative execution and rollback and have another go. Branch prediction is an example.

All of this is possible only with careful bookkeeping of the microarchitectural state. I agree the CPU is a stateful object. But even at the lowest level of interface we have with the CPU, machine code, there are massive gains from moving away from a strict procedural execution to something slightly more declarative. The hardware looks at a window of, say, 10 instructions, deduces the intent, and executes the 10 instructions in a better order, which has the same effect. (And yes, it's hard for me to wrap my head around it, but there is a benefit from doing this dynamically at runtime in addition to whatever compile-time analysis.) In short, it is beneficial to go back and have another go.

This was also demonstrated in https://snr.stanford.edu/salsify/. If you encode a frame of video but your network is all of a sudden too slow, you might want to encode that frame at a lower quality. Because these codecs are extremely stateful (that's how temporal compression works), you have to be very careful about managing the state so you can "go back and have another go".

I am less confident about it, but what you say about the universe also seems wrong. What physical laws do you know of that take the form of specifying the next state in terms of the preceding state? And many of them are literally time-reversible.


Thanks. It was an attempt at parody but apparently I didn't lay it on thick enough. I'll try to up my false-statements-per-paragraph count next time.


What you wrote about CPUs many people believe, and many simpler CPUs operate like that. So it was difficult for me to detect as parody. Sorry that I missed it! It's funny in hindsight.

Not sure what the parody was in computing 2020 before 2019.


> Exactly. I don't know why Haskell fanboys insist on abstracting us so far from the machine.

I believe it is because abstractions are the way we have always made progress. Is the C code that's so close to the machine not just an abstraction of the underlying assembly, which is an abstraction of the micro operations of your particular processor, which in turn is an abstraction of the gate operations? The abstractions allow us to offload a significant part of mental task. Imagine trying to write an HTML web page in C. Sure it's doable with a lot of effort, but is it as simple as writing it using abstractions such as the DOM?

> We live in a mutable physical universe that corresponds to a procedural program. One thing happens after another according to the laws of physics (which are the procedural program for our universe).

You just proved why abstractions are useful. "One thing happens after another" is simply our abstraction of what actually happens, as demonstrated by e.g. the quantum eraser experiment [1][2].

[1] https://en.wikipedia.org/wiki/Delayed-choice_quantum_eraser

[2] https://www.youtube.com/watch?v=8ORLN_KwAgs


>I believe it is because abstractions are the way we have always made progress.

>The abstractions allow us to offload a significant part of mental task

Edge/corner cases in our abstractions is also how propagation of uncertainty[1] happens. You can't off-load error-correction [2]

[1] https://en.wikipedia.org/wiki/Propagation_of_uncertainty

[2] https://en.wikipedia.org/wiki/Quantum_error_correction


I'm not sure what you mean by "You can't off-load error-correction". In the case of classical computing, we do off-load error-correction (I don't have to worry about bit flips while typing this). In the case of quantum computing, if we couldn't offload error-correction, an algorithm such as Shor's wouldn't be able to written down without explicitly taking error-correction into account. Yet, it abstracts this away and simply assumes that the logical qubits it works with don't have any errors.


> Instructions in the CPU happen one after another

https://en.wikipedia.org/wiki/Superscalar_processor

> a CPU will never execute later instructions before earlier instructions

https://en.wikipedia.org/wiki/Out-of-order_execution

> The CPU would never run some code and then decide it wants to go back and have another go

https://en.wikipedia.org/wiki/Speculative_execution


Actually, the FP fanboys have been doing this all the way back to those IBM mainframes where Lisp ran.

Even C abstracts the machine; the ISO C standard is written for an abstract machine, not the high-level assembly many think it is.

Abstracting the machine is wonderful. It means that my application, if coded properly, can scale from a single CPU to a multicore CPU coupled with a GPGPU, distributed across a data cluster.

Have a look at Feynman's work on the Connection Machine.


> The universe would never consider running 2020 before 2019 and a CPU will never execute later instructions before earlier instructions.

Except thanks to a compile-time optimization.


And out-of-order execution, which happens at _runtime_.


This 1000 times.

The reason SQL (really - relational algebra) works so well is precisely because relational data is strongly normalized [1].

But the data is only a model of reality, not reality itself. And when your immutable model of reality needs to be updated, strong normalisation is a curse, not a blessing. The data structure is the code structure in a strongly-typed system [2].

Strong normalisation makes change very expensive.

[1] https://en.wikipedia.org/wiki/Normalization_property_(abstra...

[2] https://en.wikipedia.org/wiki/Code_as_data


> The world is not functional, as it turns out.

Have you watched the talk "Are we there yet?" by Rich Hickey? He makes a convincing case, referencing the philosophy of Alfred North Whitehead, that "the world is functional".

http://www.infoq.com/presentations/Are-We-There-Yet-Rich-Hic...


I like hickey but a lot of his examples are data driven domains where we can indeed model the world in functional or accounting-like terms.

Where this falls short is domains that are extremely ill-suited to keeping track of past states over time (his identity idea) simply because it would break performance or be hard to model in those terms, say simulations, game development, directly working with hardware and so on.

Much of his argument relies on the GC and the internal implementation being able to optimise away the inefficiencies of the functional model, which needs to recreate new entities over and over, but this simply is not always enough.

This also is very obvious if you look into the domains where Clojure or other declarative functional languages have success, it's almost always business logic / data pipeline work.

edit: And in fact a lot of his most salient points aren't really as much about functional programming as they are about loose coupling and dynamic typing. A lot of his criticism of state and identity in languages like Java isn't related to Java not being functional, it's related to Java breaking with the idea of late binding and treating objects as receivers of messages rather than "do-ers".


> it would break performance or be hard to model in those terms, say simulations, game development, directly working with hardware and so on

Performance seems to be the problem there more than the model not fitting. And Rich’s answer would probably be that Clojure isn’t the right tool for those tasks in the same way that any general-purpose GC’d language isn’t.

Modeling a game or sim as a successive series of immutable states sounds great to me, so I’m curious where you see a mismatch with them. Abstracting hardware properties this way sounds useful too, but I’ve not worked with it enough to comment.


>Modeling a game or sim as a successive series of immutable states sounds great to me, so I’m curious where you see a mismatch with them.

Because I don't really think the functional model describes it in an intuitive way. You can model a sim or game at a high level like worldstate1 -> worldstate2, etc., but it doesn't really tell you much: often you don't care what the state was a second ago anyway, particularly not in its entirety, and at that high level of abstraction you don't get any useful information out of it. So there's no point in tracking it the way it makes sense to track a medical history or a bunch of business transactions.

Rather, instead of thinking of games or sims as high-level abstract transitioning states, we tend to reason about them as a sort of network of persistent agents or submodules, and that lends itself much more closely to an OO or message-based view of the world.

I think in many systems that are highly complex and change very incrementally, reasoning about things in terms of functions doesn't really tell you much. You can, for example, reason about a person as, say, a billion states through time, but there's not much benefit to it, at least to me.


That makes sense, and the actor model (well, entity-behavior, which is fairly close) has been what I reached for in the past when doing game dev.

I agree that looking at a whole worldstate at once is unlikely to be useful. But I do see great value in keeping state changes isolated to the edges of a system, and acting upon it with pure functions. If you have the means to easily get at the piece of state tree that you’re interested in, you can reason more clearly about how each piece of a simulation reacts to changes by just feeding the relevant state chunk to it and seeing what comes back. That takes more work to set up or repeat if each agent tracks its own internal state.

I haven’t used Clojure to make a game before, so I’m speculating here. I find myself avoiding internal state by habit lately, though of course “collections of relevant functions” are valuable tools. I just lean towards using namespaced functions instead of objects with methods.


A lot of what's done in software eventually ends up getting hardware support. Our current von-Neumann/modified-Harvard hardware architectures are old and it's precisely thinking outside these lines that will lead to non-local maxima.


In a functional world persistence (e.g memory) cannot be a thing.

The case is convincing, but it requires you to reject your own, human, faculties.

Makes the use/expression of language rather awkward when you can't remember any words.


> In a functional world persistence (e.g memory) cannot be a thing.

Why not?

To give a functional Haskell flavoured example, I can use the Reader and Writer monads (which are functional) to create a whole bunch of operations which write to and read from a shared collection of data. That feels a lot like memory to me.

Indeed, the Reader monad is defined as:

> Computations which read values from a shared environment.

I just don't understand the whole "you can't have memory / order of operations / persistence / whatever else" argument against functional concepts when they were implemented in functional ways decades ago. The modern Haskell implementation of the Writer monad is based on a 1995 paper.
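To make that concrete, here is a minimal hand-rolled sketch (deliberately not the mtl library version) of functional "memory": state is just a value threaded through pure functions, and a "write" produces a new state rather than mutating anything.

```haskell
-- A tiny hand-rolled State monad: "memory" as a pure function from the
-- old state to (result, new state). Nothing is mutated in place.
newtype State s a = State { runState :: s -> (a, s) }

instance Functor (State s) where
  fmap f (State g) = State $ \s -> let (a, s') = g s in (f a, s')

instance Applicative (State s) where
  pure a = State $ \s -> (a, s)
  sf <*> sa = State $ \s ->
    let (f, s')  = runState sf s
        (a, s'') = runState sa s'
    in (f a, s'')

instance Monad (State s) where
  sa >>= k = State $ \s ->
    let (a, s') = runState sa s
    in runState (k a) s'

get :: State s s            -- "read memory"
get = State $ \s -> (s, s)

put :: s -> State s ()      -- "write memory" (really: return a new state)
put s = State $ \_ -> ((), s)

tick :: State Int Int       -- read the counter, then bump it
tick = do
  n <- get
  put (n + 1)
  pure n
```

Running `runState (tick >> tick >> tick) 0` gives `(2,3)`: each later read sees the earlier writes, yet everything is a pure function.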

Edit: it looks like who I responded to doesn't actually want to have a reasonable discussion, but for anyone else reading along, it's entirely possible to have functional "state" or "memory" - what makes it functional is that the state / memory must be explicitly acknowledged.

Trying to dismiss functional computation in this way is essentially a no true Scotsman; functional computation is useless because it can't do X (X being memory or persistence or whatever), but when someone presents a functional computation that does do X, it's somehow not a "real" functional computation precisely because it does X. Redefining functional computation as "something that can't do X" doesn't help anyone, and doesn't actually help with discussing the pros and cons of functional programming, since you're not actually discussing functional computation but some inaccurate designed-to-be-useless definition mislabeled as functional computation.


>what makes it functional is that the state / memory must be explicitly acknowledged

Isn't state/memory assumed by default when talking about computation?

What kind of computations you can perform on a Turing machine without a ticker tape?


Lambda calculus, for example, has no ticker tape.


What kind of computations can you perform with Lambda calculus without any input variables?

What are you applying your α-conversion and β-reduction operations to?


[flagged]


If I am Euthyphro, then you are Socrates - no?

I thought Socrates was the troll.


What is this thing that you reading from and writing to?

It sounds mutable.


I’m sorry if immutability is a new concept to you, but you can read from one state, and write to a new state.

There you go. No mutation!


Writing state is the definition of mutation.

It's what all data storage systems do.

An immutable data store sounds pretty useless.

https://en.wikipedia.org/wiki/Persistence_%28computer_scienc...


It’s fine for you to think that. That’s not going to stop other people from finding these concepts useful though.

Once again, you are free to ignore the things you don’t understand. Your trolling is mostly harmless.


>you are free to ignore the things you don’t understand

Is that why you are ignoring the complexity behind persistence/mutability/state?

Even the Abstract Turing machine has ticker tape you know...


Interesting point. Lambda calc is equivalent, where the expression can be seen as equivalent to the ticker tape. So when a reduction occurs in lambda calculus, it's just like a group of symbols being interpreted and overwritten. But the thing is, no one has to overwrite the symbols to "reduce" them. It's just a space (memory) optimization. Usually to calculate something (you're fully correct by the way) we do overwrite symbols. The point of using immutable style is simply to make sure that references don't become overwritten. It sucks to have x+2=8 and x+2=-35.4 within the same scope, right? Especially when the code that overwrote x was in a separate file in an edge case condition you didn't consider.
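A tiny illustration of that last point (the binding name here is invented for the example):

```haskell
-- In a pure language a binding's meaning is fixed for its whole scope,
-- so "x + 2" can only ever mean one thing here.
eight :: Int
eight = let x = 6 in x + 2

-- Trying to rebind x within the same let is rejected at compile time:
--   let x = 6; x = -37 in x + 2   -- error: conflicting definitions of 'x'
```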


Thanks for your input, Euthyphro!


My knack for spotting performative contradictions didn't sway you towards Descartes?


Actually, according to quantum physics, the world is functional. The Universe can be described completely by a function (the universal wave function) and is completely time-reversible; you can run it forwards or backwards because it has no side-effects.


The collapse of a wave function into a single state that apparently happens on observation is as far as we know non-deterministic. The wave function itself is a probability amplitude. So I think it's a misinterpretation to say that "the universe can be described completely by a function" when everything we can actually know about the universe is observable and has necessarily collapsed from a probability amplitude of possible states into the observed state.

My interpretation is the exact opposite. Newtonian physics suggests the fundamentally deterministic world that QM can't because in the latter the only thing that is deterministic is probabilities.


Collapse of a wave function occurs when one wave becomes entangled with another. If I shine a photon on a particle to measure it, and the particle emits a photon back to me, I am now entangled with that particle through mutual interaction. But to a distant observer, there is no wave collapse. If our universe is purely quantum and exists alone (not being bumped into by other universes), then its wave function would just be evolving unitarily according to its Hamiltonian - totally deterministically (the wave function, that is).


I don't know what it means for the universe to be "purely quantum", or what it means to be a "distant observer" yet somehow not interacting. A wave function in itself is not observable. Is it real? Definitional. Is it a physical phenomenon? Dubious. All we know is that as a model it is consistent with what we observe. What actually can be measured (and can reasonably be considered real and physical in that sense) can only be predicted probabilistically using quantum mechanics. It's like saying that I can perfectly predetermine the outcome of a dice roll—it's between one and six.


By that standard every programming language is functional, since the compiled program is a function.


Since I got downvoted let me elaborate: it isn’t relevant that the universe is a function. What matters is whether the universe can be modeled better as the composition of pure functions or as some stateful composition (e.g. interacting stateful objects).

Programming languages aren’t about the final result but about how you decompose the result into modular abstractions.


You're just stating claims without any evidence.


> I get answers about idempotence and list comprehension and strong typing which are great tactics but I never get the sense that they fit into an overarching strategy for how my life will be made easier by using Haskell.

First, I suppose those three things fit together in the sense that they make list comprehensions safe and fast.
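For instance, a sketch of a typed list comprehension, where the element type is checked before anything runs:

```haskell
-- The compiler verifies every element really is an Int at compile time;
-- the filter and the transform are declared, not looped by hand.
doubledEvens :: [Int]
doubledEvens = [ x * 2 | x <- [1 .. 10], even x ]   -- [4,8,12,16,20]
```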

But as you say, Haskell isn't the only language that gives you that.

I think the killer feature of haskell is that the community went through ridiculous contortions and pain to get rid of side effects.

Which means that Haskell programs exclude certain kinds of bugs - so it's not just that you can program in a functional style, but that you can be fairly certain there's no "shifting sands" under your functions.

On the other hand, I see many haskell programmers say that the powerful type checker forces them to think a bit differently, and perhaps harder, when writing - with the result that once the type signature fits, the function tends to "just work".

Anyway, I never did really enjoy Haskell - not as much as StandardML anyway. Sadly there doesn't seem to be any sml with good real-world libraries, so I've not really been using that either, outside of university :/

Perhaps looking at one of the few popular haskell utilities, and see if you feel Haskell is a good fit? I'm honestly not certain myself.

https://github.com/jgm/pandoc/blob/master/src/Text/Pandoc/Re...


Haskell's type system works best for me in allowing fearless changes, even significant ones to the core of a code base. The compiler won't let me overlook something.

A comprehensive test suite would achieve the same thing, but I'm rarely confident that my test suites are truly comprehensive.
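A hypothetical sketch of what that fearless change looks like (type and names invented for the example): extend a type with a new constructor, and the exhaustiveness checker flags every function that forgot to handle it.

```haskell
{-# OPTIONS_GHC -Wincomplete-patterns #-}

-- Later, add a constructor:  | Transfer
data Payment = Card | Cash

describe :: Payment -> String
describe Card = "card"
describe Cash = "cash"
-- After adding Transfer, GHC warns here: "Patterns not matched: Transfer",
-- pointing at every place the change must be propagated.
```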


If you're having trouble justifying the switch to Haskell and are primarily concerned with systems programming, I'd recommend try out a language like Rust. Many of the features present in Rust were inspired by Haskell, and there is quite a bit of overlap between the communities.

As far as Haskell itself goes I've written a little in the past in comments on here about why it's useful:

https://news.ycombinator.com/item?id=20111321

https://news.ycombinator.com/item?id=20260095

These days when I want to start a new project the decision on whether to use Haskell or Rust is the biggest consideration I make -- the expressiveness and safety and correctness Haskell can provide versus the raw speed, ease of build, memory safety rust provides.


Now I don't know Haskell particularly well but I have found another language I'm focusing on, that people basically have the same question about.

For me, the quest is to be able to build more with less effort, with less bugs and make things more maintainable for the future. For this, I've found Clojure (of the 10+ languages I've professionally worked in) to be the best one, but like all the rest of the languages, it doesn't fit every single use case. But most of them, so far at least.

I could go on and on giving you arguments for/against, but I think these goals are the same for most programmers who aim to learn/use some of the lesser known languages.


I think it has a lot to do with taste; you really need to like typing and abstracting everything; which I do in C# (no stringy types unless i'm in a blistering hurry) because I get paid to program C# because I actually find this very beneficial, but not limited to Haskell; Haskell taught me to do it to extremes though which, in my opinion, is what you are basically asking. That 'elitist' community in Haskell teaches a lot of abstractions, that, once practically understood, for me, give a great amount of power (which, for me again, means a far better understanding of large codebases and easy adding of features. For me that would answer the question you ask as a lot (most) of these are just very hard (not impossible which is why we see more and more of them, but not at the speed the Haskell community introduces them) to move to other languages.


Short answer: because of the features you mentioned, you will be able to write code which is correct, self-documenting, and achievable in fewer lines of code.[1]

Your life won't be easier if you solve problems, because solving problems is hard. But you get a lot of benefits from using any language which has features similar to Haskell's, for the above reasons. If you are into programming language research, compilers, theorem provers, etc., Haskell comes with enough idioms, features and tools to be a viable language for the tasks at hand.

[1] A lot of languages rightly claim this, like Lisps and modern Lisp derivatives, but Haskell brings with it a top-of-the-line type system, (imo) clearer syntax and tooling than OCaml, and decent support from industry and academia.


It’s self documenting if you consider a type signature to be documentation.


In addition to type signatures, the whole syntactical outlook resembles mathematical definitions.
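For example, this small sketch's signature alone says "may fail, performs no IO, works for any element type" before you read the body (the function name is my own):

```haskell
-- [a] -> Maybe a: takes any list, returns its first element or signals
-- failure explicitly; the type rules out exceptions and side effects.
safeHead :: [a] -> Maybe a
safeHead []      = Nothing
safeHead (x : _) = Just x
```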


This comment misses the point of the article, which argues against an urban legend that Haskell is too difficult "unless you are this tall" (a PhD, an E.T., a superhuman). I also had a shot at explaining why people are already tall enough in a past short "comic strip" [0]. I think the article does a good job of showing that the urban legends are a myth.

Now, there are many reasons why one would want to use Haskell. For instance, to have fun, to understand some pattern better, to write expanding-brain memes, or because they are good at solving problems with it. It is fine if your reasons do not intersect with other people's reasons. You can try to find out whether Haskell matches your reasons in some other essays [1,2,3]. Maybe it is true for many people that their needs and desires are entirely covered by their own toolset. As a curious/optimistic person I find it incredibly pedantic to say I'll never ever need to learn/use something (there's different goodness in everything).

Personally, practicing Haskell led me to appreciate the importance of, and trade-offs involved in, isolating/interleaving side-effects. Similarly, it gave me some vocabulary to articulate my thoughts about properties of systems. Both are super important in the large, when you architect software and systems. There are other ways to learn that (e.g., a collection of specialized languages), but at least for me, Haskell helped me build an intuition around these topics. That said, the prevalence of negative and derailing comments in discussions about Haskell can be demotivating (but our industry is like this).

[0] https://patrickmn.com/software/the-haskell-pyramid/ [1] https://www.snoyman.com/blog/2017/12/what-makes-haskell-uniq... [2] https://www.tweag.io/posts/2019-09-06-why-haskell-is-importa... [3] http://blog.vmchale.com/article/functional-haskell


"Idempotence, list comprehension and strong typing", efficient communication is done succinctly through terminology.

Idempotence eliminates the need for a whole bunch of retry and error checking code, which themselves are often highly stateful and susceptible to race conditions and bugs.

List comprehensions can allow for certain guarantees about parallelizing, which is becoming ever more relevant as processors scale with threads/cores rather than clock speed. All without explicit programmer intervention.

Strong typing means the compiler can check types at compile time rather than runtime - this (largely) eliminates entire classes of errors before they happen. If errors happen at compile time, the programmer fixes them; if errors happen at runtime, the user gets annoyed.

These are just trivial examples of some of the benefits of these technologies - see how these strategies eliminate not single errors but entire classes of errors. Selecting the right technologies means fewer bugs, fewer headaches for users, fewer support calls, less firefighting, and THAT is how your life will be easier with Haskell, for example.
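As one concrete sketch of an eliminated error class (assuming the containers library that ships with GHC): a lookup can fail, and the Maybe result forces the caller to handle "not found" before the program even compiles.

```haskell
import qualified Data.Map as Map

-- Map.lookup returns Maybe, so the missing-key case cannot be forgotten:
-- omitting the Nothing branch draws a compiler warning, and there is no
-- null to dereference at runtime.
greeting :: Map.Map String String -> String
greeting env = case Map.lookup "USER" env of
  Just name -> "hello, " ++ name
  Nothing   -> "hello, stranger"
```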


Idempotence means spending days thinking about how to rewrite large parts of your application to fit in "stateful" actions (for example, logging?), or how to make your list processing actually fast (hint: it doesn't work with lists). Idempotence makes it extremely hard to control when something actually happens or how much memory is used.

Strong typing means you'll spend your time waiting for the compiler to finish, or thinking how to unwrap this monad stack in a way that doesn't suck. It also means large parts of your app will have very unstable interfaces with way too many dependencies, you'll constantly be fighting with cabal or whatever, you'll often have to do busy work to keep track with external dependencies, you'll constantly be tempted to represent a different or additional subset of your invariants in the type system using the newest fancy language extension (that slows your compiles down even further, and may have subtle interactions with some of the other extensions you're already using).


> Strong typing means you'll spend your time waiting for the compiler to finish

You can see this is false by running ghc in type-check-only mode ("-fno-code"). Type checking is less than 10% of compile time. If Haskell takes a long time to compile it has nothing to do with types.

(and you mean "static typing")


I can't try out your suggestion right now because I've basically quit this ecosystem. But I would be careful about making any generalizations if the user can do significant computations at compile time.

> (and you mean "static typing")

Let's not split hairs. And I think "strong" is actually what I want to say (maybe better: "advanced" or "complex"). C also has "static typing" and I explicitly don't mean a simple type system like that.


> I would be careful about making any generalizations if the user can do significant computations at compile time.

Indeed, and with some language extensions type checking may never terminate! But library code only has to be compiled once and if you write code that takes a long time to type check that's on you. Bog standard type safe Haskell 2010 code type checks in the blink of an eye.

If your argument is "advanced type system features are too seductive for people to avoid" then I'd be more inclined to agree with you. But that's not what you said.

> > (and you mean "static typing")

> Let's not split hairs.

Well, Python has strong typing and doesn't take long to compile. Perhaps you meant "some advanced features of Haskell's type system". If so I'd agree with you. But don't discourage people from Haskell by making false statements about it and then accuse me of splitting hairs.


Ok, I appreciate the refinements, and I think we agree.


> Idempotence means spending days thinking about how to rewrite large parts of your application to get in "stateful" actions (for example, logging?) or how to make your list processing actually fast (hint, it doesn't work with lists).

Idempotence isn't a golden hammer; it's not suitable for every task. It is suitable for one-way purchase code, increasing robustness on faulty networks, and initializations. It's not suitable for logging.

It's not suitable for making list processing fast (that was one of the guarantees of list comprehensions; I think you got confused there).

> Idempotence makes it extremely hard to control when something actually happens or how much memory is used.

No it doesn't, that's totally wrong. Idempotence doesn't dictate execution time or memory used, it's not an implementation detail it's an abstraction level higher than that : it's a system strategy. This sounds like you're confusing idempotence with lazy evaluation.
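Idempotence really is just an ordinary, testable property: applying the operation twice gives the same result as applying it once, so a retry is harmless. A toy sketch (the function name is invented):

```haskell
import Data.Char (toUpper)

-- An idempotent operation: normalize (normalize s) == normalize s.
-- A retried "normalize this record" request cannot corrupt anything.
normalize :: String -> String
normalize = map toUpper
```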

> Strong typing means you'll spend your time waiting for the compiler to finish,

Compilers are really fast these days, only compiling the changes. "Waiting for compiler to finish" hasn't been a problem since the 90's, even on larger codebases (100,000+ LOC).

Also: if you can't be bothered to wait for the compiler to check your code, then the entity that's going to do it will be your users, in production. So maybe you can spend the time you saved not waiting for the compiler answering the support tickets coming in?

> thinking how to unwrap this monad stack in a way that doesn't suck

Subjective, no examples. This is just whining.

> It also means large parts of your app will have very unstable interfaces with way too many dependencies, you'll constantly be fighting with cabal or whatever, you'll often have to do busy work to keep track with external dependencies, you'll constantly be tempted to represent a different or additional subset of your invariants in the type system using the newest fancy language extension (that slows your compiles down even further, and may have subtle interactions with some of the other extensions you're already using)

Unstable interfaces with way too many dependencies is not a language problem it's a system design problem. It sounds like you have inexperienced software architects making poor decisions.


Wow, either this language has significantly transformed since I last worked in it 3 years ago, or you need a reality check.

Won't reply to everything you've said, but just one little thing that I remember vividly. I was trying to get 300-400 lines of basic OpenGL setup code to work. Each time I made a little change it took 10+ seconds to compile. I went with the minimal types needed to interface with this OpenGL library, which was nothing fancy by Haskell standards, just a "straightforward" C wrapper.


GHC is fairly slow for a compiler, and Haskell has no separation of interface and implementation, so small changes can have big consequences.

Template Haskell is the worst for it, but I did have a code generator produce code that took about 25 GB of memory to compile recently, which was very, very slow on a 16 GB machine ;)


It's just a solid mature programming language.


There are plenty of those, though, and most others seem to make both getting started and doing practical things a lot easier.


It's actually very easy to get started with Haskell. Just install the Haskell Platform and you have the environment.


I don't mean installing it, I mean learning the language as a beginner (already a programmer). I've had a couple of goes at it in years past, without success.


If you are into books, I recommend http://haskellbook.com/

After reading it, the language just clicked with me, although it is for complete beginners, it is a great read nonetheless.

Doing practical stuff like webdev, apps etc is a bit trickier, partly because Haskell is not the first language of choice for these kinds of projects, and thus lacks much needed documentation for starting up and best practices in those spheres.


From that article:

"I do believe that there is real value in pursuing functional programming, but it would be irresponsible to exhort everyone to abandon their C++ compilers and start coding in Lisp, Haskell, or, to be blunt, any other fringe language."

Even Carmack has a subliminal jab at languages on the basis of popularity from time to time lol


For me, the reason is rather simple- every single program I wrote in Haskell, once successfully compiled, ran correctly.


If someone from the 50s told you "no one has ever been able to explain to me why I should use stored-program computers instead of my cables and plugboards", what would your answer be?

Maybe your answer will be that that the stored program is an abstraction of the cables and plugboards, so you can be more productive.

Today, there are several additional layers of abstraction on top of that, functional programming is one of them.


Imagine you spent a career learning the piano. Would you expect to pick up the violin as if you were an expert?

Would it be reasonable to declare violin sucks! I've played piano for 30 years, vibrato is so hard it must be wrong, tell me why I should learn the violin?

Actually probably a lot will come with you, your dexterity and music theory will be quite useful, as will your ear.

But really, the instruments are very different and if you want to level up your sense of pitch or really learn how to sing a melody, the violin can be great for that...


Violin is definitely much harder to pick up than piano though, regardless of experience with other instruments. Obviously mastery is hard for both, but getting a basic pleasant melody out of a piano is far easier than doing the same with a violin.

Haskell is rather similar to a violin in that respect.


>Would you expect to pick up the violin as if you were an expert?

Rightly or wrongly, in programming we do expect this. We keep saying things like how languages are only tools, and how terms like "Java programmer" is as absurd as "Casio mathematician".

If you have effortlessly switched between imperative OO languages many times before, it's easy to think that any language you can't quickly learn must be because it's fundamentally and abnormally difficult.


Are you replying to the wrong link? The link is about programming fyi


I'm trying to draw analogy to maybe something less inflammatory.

Most of us here are programmers, a lot of us professionals.

Music is the output of musicians, as perhaps code is the output of programmers.

But music comes in all kinds of flavours: classical vs jazz vs pop

Performance vs composition

Recital vs improvisation

And then choice of instrument within any of those areas.

I feel a lot of people misplace their aversion to Haskell because they fail to recognize how different Haskell is.

It's a bit like when English speakers think Chinese is a "hard" language. It's not, linguistically it's actually a pretty simple language (whether the script is easy/hard is up for grabs...)

Actually, Chinese is just different from English, so you have few anchors and familiar friends to base things on, and it feels hard because you're starting afresh, like a child. And yet children in China manage just fine learning Chinese...

Meanwhile, ask a Spanish speaker about their experiences learning Portuguese...


There is some utility in carving out a subset of a language to make the remaining part easy to comprehend and easy to contribute to. C++ is a large and complicated language but every team seems to be using a different subset of it. The same thing would happen with Haskell. Every one might have a different idea of where to draw the line for restricting advanced features. The author might decide that Haskell 98 plus OverloadedStrings is good enough. That's a valid stylistic choice. I would have drawn the line further; I personally think features to simulate dependent types aren't worth it (think singletons or DataKinds or TypeInType) but lens is absolutely necessary even though it is difficult for beginners to understand. But ask a colleague of mine, and you might receive a response that we should embrace complicated type-level programming like the kind seen in Servant, but instead restrict TemplateHaskell. All these are valid stylistic choices but they fragment the community. And eventually debugging other people's code bases (say dependencies) you would sooner or later have to face features you don't use.

Let us not forget the roots of Haskell as a research language where ideas in functional programming are to be tried out. When it emerged in the 1980s it was literally because a committee wanted a solid foundation to replace a disparate mélange of functional programming languages. It succeeded in that, and immediately introduced new features then considered highly novel (e.g. type classes). In this sense, Haskell will never be as practical and pragmatic as Go, where moderately modern language features aren't even in the language. Choosing a language like Go could be a valid choice, and so is choosing a language like Haskell.


Haskell is no doubt an interesting language. I have yet to work on a project where it was the obvious choice but I’ve played around with it enough to like it.

What I don’t like are some (not all) of the people who use Haskell and talk about it online. They can be incredibly obnoxious. Haskell is not a tool they work with, it’s their entrance to a class of software engineers that you’re not capable of being part of.

Just for the record: Anyone can learn Haskell. If you’re struggling with it that’s because of a tooling / literature issue. Haskell people (even the nice ones) are bad at explaining the language. The tooling isn’t very friendly and the language is a shit sandwich to debug.

The concepts therein are no more difficult to grasp than any other in programming. Is a Monad a box or a burrito? Please stop. It’s a language feature that allows you to do certain things more concisely than the alternative. Stop pretending it’s magic and show people how to use it and why they should. Stop drawing pictures.
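In that spirit, a concrete usage sketch rather than a metaphor (function names invented): Maybe's Monad instance just short-circuits on Nothing, replacing nested null checks.

```haskell
-- half only succeeds on even numbers.
half :: Int -> Maybe Int
half n
  | even n    = Just (n `div` 2)
  | otherwise = Nothing

-- Chain with >>=: the first Nothing stops the pipeline, no if-nesting.
eighth :: Int -> Maybe Int
eighth n = half n >>= half >>= half
-- eighth 24 is Just 3; eighth 20 is Nothing (5 is odd, so it stops there)
```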

No, algebraic data types are not hard to understand. You just make a big deal out of them because you think the word algebraic sounds cool.
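And a sketch of how mundane an algebraic data type is (type and names invented): a value is exactly one of the listed shapes, carrying the listed data.

```haskell
-- "Algebraic" just means: a Shape is a Circle OR a Rect (a sum),
-- and a Rect carries a width AND a height (a product).
data Shape
  = Circle Double        -- radius
  | Rect Double Double   -- width and height

area :: Shape -> Double
area (Circle r) = pi * r * r
area (Rect w h) = w * h
```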

I could keep going but I think I’ve made my point.

Dear Haskell people: Get over yourselves. Do not mistake the larger software community’s disinterest in your pet language for their inability to use it. That’s simply not the case. If people had to use it even ‘lowly js devs’ would master it.

Everyone else: If you haven’t used it already it is a very cool language and worth learning even if just for personal edification. Don’t let the self appointed high priests turn you off. Don’t expect their help either.


Sorry you've experienced this obnoxious behaviour. I've been in the community for years and never seen it. Maybe I'm not hanging out in the same places as you or I have some sort of unconscious bias that means I don't notice it.

Could you link me to an example? I have some measure of authority in the community and would like to help decrease the amount of obnoxiousness but I can't unless I know where it happens.

> Dear Haskell people: Get over yourselves. Do not mistake the larger software community’s disinterest in your pet language for their inability to use it. That’s simply not the case. If people had to use it even ‘lowly js devs’ would master it.

This is really interesting because the Haskell community I hang out with (mostly on Reddit) would love it if "lowly js devs" would master it. In fact we all too often get told by non-Haskellers that "Haskell will never succeed because it's too hard for most programmers to learn"!


Well, what about this article? It claims I just have to understand 4 basic concepts to replace java, python and Ruby.

Great, so using those simple properties let's reimplement Eclipse. Let me use tensor flow. Replace my Ruby on rail website.


I replaced Rails with PostgREST, which is written in Haskell but for me as a user is just a binary that I run in front of my database, so now I don’t have to write any backend code at all.

Good enough for you?


I can see how (certain interpretations of) it could be considered wrong. I'm failing to see condescension.


The condescension is the authors assumption that this is right, and a tiny fraction of haskell can replace all uses of other major programming languages.

This to me (as a non haskell user) implies other languages can't have anything interesting to offer, as they can be so trivially replaced. That is the type of attitude I see from haskell programmers.


I see. Interpretations of that particular passage range from all the way from trivially true (Turing completeness) to trivially false (I can't build anything that uses Ruby on Rails in Haskell). Certainly there are some intermediate interpretations, like the one you suggest, by which it is condescending.


FWIW, Haskell/Yesod has replaced all my Ruby on Rails websites.


But, can you do that with a tiny boring fragment of haskell, as described in the article?


Yes. All of the Haskell we write for our business is boring. More here: https://news.ycombinator.com/item?id=21145872


While I'm happy you are having success and enjoying Haskell, I feel in the context of this article (you can have success in Haskell with a small easy fragment of the language), the use of the "Lens" package means you can no longer claim to be writing easy simple Haskell, unless you are using it in some way I'm unfamiliar with :)


I think that the obnoxious smugness is felt by those outside the community, not by those within it. This could be because the community is nicer to those inside, or because those outside perceive smugness where those inside do not. (I make no claim as to whether that is the fault of those inside, or those outside.)


Haskellers do tend to be very identitarian. Not unlike vegans, or CrossFitters.


I remember Rob Pike unveiling Go at a Google talk years ago and some Googler got up and said "why would Go be better than Haskel?" and Rob said "Because I can't understand Haskel source code and I'm pretty sure you could understand Go source code right now."


I think this is still one of the strongest selling points for Go. It is a pretty solid language with a few basic concepts which are repeated constantly. Sure, it doesn't have generics, error handling is a mess and the language design isn't as consistent as it should be.

But what it does, is pushing everybody to write simple code which is easy to understand. So many times I found the documentation (of some library) to be incomplete, but jumping right into the code answered the questions I had. I can't say that about every language.


Totally agree. When I first encountered Go, I kinda hated it for its lugubrious combination of being rather "braindead" and opinionated about being braindead at the same time. Typing it every day gave me flashbacks of programming Java circa JDK 1.4 (before the compiler-only based generics were introduced in 1.5, which I still believe didn't solve enough problems for the complexity it added). In short, it felt like a step backward.

But after seeing its gc performance sitting at the single-digit ms timeframes with huge heaps (60 gigabytes!) I realized I needed to hold my nose.

What helped me make peace with it is its automated testing story. It's baked into the language in a way I've rarely seen in any other runtime.


Blog post that explains Functors, Applicatives, And Monads in pictures http://adit.io/posts/2013-04-17-functors,_applicatives,_and_...


This blog post may help some but readers shouldn't expect it to magically flick a switch that allows them to use or create monads.

Instead, I really like:

1. Stephen Diehl's “Eightfold Path to Monad Satori” from “What I Wish I Knew When Learning Haskell”:

http://dev.stephendiehl.com/hask/#monads

“Much ink has been spilled waxing lyrical about the supposed mystique of monads. Instead, I suggest a path to enlightenment:

1. Don't read the monad tutorials. 2. No really, don't read the monad tutorials. 3. Learn about Haskell types. 4. Learn what a typeclass is. 5. Read the Typeclassopedia. 6. Read the monad definitions. 7. Use monads in real code. 8. Don't write monad-analogy tutorials.

In other words, the only path to understanding monads is to read the fine source, fire up GHC, and write some code. Analogies and metaphors will not lead to understanding.”

2. Chris/kqr's “The ‘What Are Monads?’ Fallacy?”, which mirrors the advice from Stephen Diehl:

https://two-wrongs.com/the-what-are-monads-fallacy

“Instead, learn to use specific monads. Learn how Maybe a works, learn how Either e a works. Learn how IO a and [a] and r -> a works. Those are all monads. Learn to use them with the >>= operator and with do notation.

Once you've learned how to work with all of those, you'll have a really good idea of how monads can be used.

Asking "What is a monad?" to learn how to use monads is as wrong as asking "What is a musical instrument?" to learn how to play musical instruments. It's a good start, but it won't teach you very much.”


Eric Lippert's blog series on Monads[1] follows this approach and is very effective in teaching the concepts behind Monads. He starts out with standard concepts from C# that are Monads and shows how they're implemented and what they have in common. It is very approachable for an object-oriented programmer who's curious about functional programming.

1: https://ericlippert.com/2013/02/21/monads-part-one/


> Asking "What is a monad?" to learn how to use monads is as wrong as asking "What is a musical instrument?" to learn how to play musical instruments. It's a good start, but it won't teach you very much.”

Another good analogy might be learning circular breathing for ordinary everyday conversation.

You might hear the circular breathing is an amazing thing that allows you to do things you couldn't otherwise do. If you are trying to use that trick without ever learning to play an instrument then there will be a great air of mystery to it all.

Monads solve problems that are only made apparent by other design decisions in Haskell. In a non-functional, loosely typed context they are not so useful.


Yes, I didn't include the papers below in my parent response because, “just read these dense academic papers to understand what monads are and why Haskell has them” doesn't tend to be good advice for Haskell beginners. But learning why monads came to be included in Haskell is useful if you want to dive deeper into this stuff:

Philip Wadler's original paper on Comprehending Monads [PDF]: https://ncatlab.org/nlab/files/WadlerMonads.pdf

Philip Wadler and Simon Peyton-Jones' paper on Imperative Functional Programming [PDF]: https://www.microsoft.com/en-us/research/wp-content/uploads/...

Simon Peyton-Jones mentions both papers and talks about why monads came to be part of Haskell in his talk on the history of Haskell here, with examples that are a little more accessible than the above papers: https://www.youtube.com/watch?v=re96UgMk6GQ [introduction to purity and then monads starts around 30:07]

It's a funny talk and it's worth watching in full, but here's a nice soundbite:

“So what did we do for IO? Well, we just didn't have any. [audience laughs] So the joy of being an academic, right, is you can design a language — in 1990, I beg to inform you — that had no input/output. A Haskell program was simply a function from string to string. That was what it was. … But this was a bit embarrassing…”

He goes on to talk about other ideas they explored to create effects, why they settled on monads, and why he wishes now they had called them something like “workflows” (as F# later did[1]) to make them sound less intimidating.

(Simon and Philip will both be at Haskell Exchange 2019 in London this coming week if anyone else, like me, enjoys spending two days as the dumbest person in the room: https://skillsmatter.com/conferences/11741-haskell-exchange-... )

[1]: https://blogs.msdn.microsoft.com/doriancorompt/2012/05/25/7-...


> Monads solve problems that are only made apparent by other design decisions in Haskell. In a non-functional, loosely typed context they are not so useful.

This is key. I've tried to write a "Monads in JavaScript" article many times over the years, but it's pointless, because you have to preface it with 100 imaginary constraints.


idk, rxjs, observables, and promises seem pretty popular. Doesn't mean it makes sense to try to write all your js code in an fp style though.


Sure they're popular, but the reason they don't talk about monads, even if they are monads in an abstract sense, is that in those problem domains the abstraction doesn't bring any clarity to the design.


Whether they talk about it or not, monads did inform their design. rxjs literally has methods named `flatmap`


> Asking "What is a monad?" to learn how to use monads is as wrong as asking "What is a musical instrument?" to learn how to play musical instruments. It's a good start, but it won't teach you very much.”

This is such a great analogy!


Condescending tutorials seem to be an integrated part of Haskell culture. The rationale might be that people who don't already know Haskell should to talked to like kids. "Don't be scared, a monad is just like a Burrito!" and so on.

I can't recollect I have ever learned a programming concept through metaphors. The only way I have learned concepts is though solving tasks in a language and thereby learning to use the tools available.

I actually think Haskell is a very enjoyable language, but there is a culture around it which treats the type concepts (arrows, monads etc.) like goals in themselves rather than tools to achieve something useful.


The only condescension I see in Haskell-related discussion is condescension towards Haskellers, claiming that Haskellers are condescending and say condescending things like writing condescending tutorials.

As I see it it’s actually not very nice.

With the utmost respect, truly, humbly: I am against this custom that it’s just fine to call well-meaning people for condescending. As it appears to me, claiming condescendence is more of a statement of the claimant, but this is still an unfortunate custom. Who are we really to claim condescension? Really?

The article linked to in the post starts off with quotes from people who directly state that they feel like they would need to be smarter to use Haskell. It’s a common thing. The article addresses that. It is literally the opposite of condescension.

Then there’s the burrito tutorials with pictures. I for one actually think in weird abstract sloppy metaphors and colors. My head really is full of burritos and inaccuracies. I was helped enormously by the burrito concept. Textbooks generally don’t speak in burritos. I work with people to whom the burrito class of analogies is not helpful. They think in a more direct and crystalline way. I wish I were more like them. And I try try to be. And I can. And it makes me better. And I really do think I also bring something to the table by spraying flaming burrito concepts and lateral jumps into the team zeitgeist.


> I was helped enormously by the burrito concept.

OK fair enough, I can only speak for myself and the way I learn concepts. The way I learn a new language or platform is always to try to write a real program that does something. I am only able to lean concepts when I can see their usefulness. But I should recognize that other people learn in different ways.


I want to add that I actually do see how it comes across as condescending. After this exchange and thinking about what you wrote. idk, like, “Here, look at these kiddie pictures, those are suitable for you, and then I’ll withhold the real work ok?”


> My head really is full of burritos

Not such a bad life, if you ask me.


   The only condescension I see in Haskell-related discussion is condescension towards Haskellers, 
fwiw I've heard exactly the same claims by and about lispers.


And sometimes justified too. But in a different way. You don't see lispers ensuring us "You are smart enough to learn Lisp" because Lisp is quite easy to get started with. And you don't usually see lispers trying to be "friendly" by communicating in children's language.



I was wrong. And that book even looks interesting!



> Condescending tutorials seem to be an integrated part of Haskell culture.

Could you perhaps link to one? I know of plenty of misguided tutorials that try to be helpful but aren't. I've never come across one I'd call "condescending".

> I can't recollect I have ever learned a programming concept through metaphors. The only way I have learned concepts is though solving tasks in a language and thereby learning to use the tools available.

Yet for some reason newcomers (reputedly) want to know what a monad is before trying to use one! I agree with you: they should go ahead and use one and not worry about what it is!


It's not unique to Haskell, and I wouldn't say condescending. More like patronizing and a little infantile. 'Learn you a Haskell' is in the same tone.

Ultimately most of these are just imitations of the 'poignant guide to ruby' which is probably responsible for starting the trend of whimsical and zany programming guides.


You need to use the IO monad to write "hello world". So it is naturally the first question a newcomer will ask. But for example "Learn you a Haskell" http://learnyouahaskell.com/chapters postpones "hello world" to chapter 9, after explaining things like type classes. So the reader is expected to read and understand 9 chapters before writing the first program? This is ridiculous - I certainly wouldn't be able to learn anything that way.


Sure you need to use the IO monad to write "hello world" but you don't need to understand what it does in the background.

  #include <stdio.h>
  int main(int argc, char *argv[]) {
    printf("Hello World!\n"); 
    return 0;
  }

  class HelloWorld {
    static public void main(String args[]) {
      System.out.println("Hello World!");
    }
  }

  main :: IO ()
  main = print "hello world"
Hello worlds in C, Java and Haskell. I can write them and run them without needing to fully understand what's behind "*argv[]", "static public void" or "IO ()".


> You need to use the IO monad to write "hello world".

This isn't actually true. You need to use a value of the `IO ()` type. Your program would still work if `IO` was not an instance of `Monad`. In a similar way, you also need to use a value of type `String` but you don't care that it implements `Read` or `Ord` or `Semigroup`.

This doesn't entirely invalidate the rest of what you've written - you do need some basic understanding of how to combine IO actions to do any more interesting IO.

But I feel like your presentation is still misleading. "Learn You A Haskell" starts with the REPL, and has you running code in the first tutorial.


Let's be fair to Haskell and LyaH:

I've taught some Java classes. I can get people writing and running trivially useful code before explaining the concepts of "class" and "static". I explain that parts of the code will remain a mystery for a little while until I explain them, I promise to do so, and of course I fulfill that promise as soon as I can.

I feel neither Java nor Haskell are intrinsically bad just because there's no practical way to teach them completely on progressively layered concepts. Even (e.g.) Lisp has a risk of using special forms before understanding them.


Sure, but Learn You a Haskell seem to try to avoid saying "I will explain this mumbo jumbo later" by explaining everything up to type classes before getting to "hello world".

I guess I should write a Haskell tutorial which starts with "hello world" without trying to teach monads beforehand!


Skipping over syntax, I think it's something like:

To write any program in Haskell, you define `main`, the IO action that does the work of your program. `putStrLn` is a function that takes a String and returns an IO action that prints the string.

So we can write "hello world" simply:

    main = putStrLn "Hello, World"
I'm curious whether you actually think that's more useful than building understanding at the REPL.


> I'm curious whether you actually think that's more useful than building understanding at the REPL.

Again, I think this depends on you way of learning. For some people it might be possible to "build understanding" gradually over weeks until you are finally ready to write "hello world" having learnt all the fundamentals of the language. That just doesn't work for me. I need to learn through writing programs which actually does something.


But in the tutorial you are building programs that actually do something - plenty more than simply printing a fixed string. You're just doing it at the REPL instead of in a file.

You keep describing the alternative as if it's a bunch of theory before any practice. It's actually just a slightly different form of practice than maybe you are used to.

I am sympathetic to the notion that it might not work for you, but you haven't spoken at all to the actual distinction. If it really does make a big difference for you, I'm very interested in any unpacking you can do.


To clarify, I am not critical of the language Haskell itself. I think it is great. It is mostly a criticism of the tutorials and general culture around the language.

For example Rust is another language which have a reputation for being hard to learn, due to novel and complex concepts like the borrow checker. But in my experience you see none of this "talking down" to the audience. They just explain how the stuff works with practical examples.


I do understand what a REPL is and have even used one from time to time. But from my perspective, a program which cannot interact with the outside world is literally useless.

It is not an accident that almost any tutorial for any language or framework or platform starts with "hello world". Because you want to start with the minimal but real, working program - and build from there.


I seriously don't see how a stand-alone "hello world" program is meaningfully "interact[ing] with the outside world" in a way that printing a string at the REPL is not. Stop ranting and posturing, and instead please try to unpack that.

In either case you are simply printing a string to the terminal. The standalone program is easier to compose in your shell, which in some contexts matters a lot, but I don't see that it does here. Where is the difference?

Further, if we define "interact with the outside world" in a way that excludes the programmer reading things off the screen, then it's plainly wrong that all such programs are "literally useless". Calculators, for instance, have delivered a tremendous amount of value. I've personally run something at a REPL (in various languages) plenty of times because I had actual use for the value to be printed and didn't need to persist the program.


The point of "Hello world" is that it is the simplest possible real, working program. It can be compiled and executed and you could deploy it to users or to a production environment if you wanted.

"A complex system that works is invariably found to have evolved from a simple system that worked."

I understand that from a certain theoretical perspective it is just the same thing to echo a string literal in a REPL, but from a software development perspective it is completely different.

> The standalone program is easier to compose in your shell, which in some contexts matters a lot, but I don't see that it does here.

I kind of see where you are coming from. You are assuming the program is only ever used by yourself. I understand this is just a different culture and hadn't even thought about that perspective.


> So the reader is expected to read and understand 9 chapters before writing the first program?

Not at all. Haskell has a REPL, so anything involving output is defined as pure functions, then executed at the REPL. For example, here's Quicksort: http://learnyouahaskell.com/recursion#quick-sort That's from Chapter 5.

> I certainly wouldn't be able to learn anything that way.

Certainly not by only reading the table of contents, sure.


Again, this is about different ways of learning which ties into what is even the purpose of programming in the first place.

For academics, the purpose of a program is to exhibit some clever abstraction. So a quicksort which can't take any input to sort and can't produce any output is a perfectly fine program - almost the platonic ideal of a program.

But for developers, the purpose of programs is to do something useful. So learning how to read input and produce output is basically the first thing I want to learn when learning a new language or platform. Because then I can start building small toy programs, gradually doing more and more stuff and learning by tackling the challenges along the way. And implementing quicksort comes way down the line of things I need to learn to write a useful application.


LYAH is full of programs before chapter 9. It does this by living in ghci. Haskell is different - it makes sense for it to start somewhere else (somewhere more useful!) than hello world.


I'm sorry, I don't understand which part of my comment you were replying to.


You are suggesting that newcomers to Haskell should not care about what a monad is. I'm pointing out that tutorials like "Learn you a Haskell" are trying to explain monads before even getting to "Hello World".


> [...] trying to explain monads before even getting to "Hello World".

Chapter 9: Input and Output

Chapter 12: A Fistful of Monads

No, that doesn't seem to be the case to me.


Well then, as you already know I agree with you that this is absurd!

> Yet for some reason newcomers (reputedly) want to know what a monad is before trying to use one! I agree with you: they should go ahead and use one and not worry about what it is!


The one thing that made monads clearer for me was this thing https://youtu.be/b0EF0VTs9Dc Monads and Gonads

It's probably condescending for haskellers because it shows that monads are not some enshrined primary concept but just a kind of dumb wrapper you can write in any language and you do when it's useful for you.


Except that most commonly used languages dont let you have polymorphic typing so the whole point is lost.


>I actually think Haskell is a very enjoyable language, but there is a culture around it which treats the type concepts (arrows, monads etc.) like goals in themselves rather than tools to achieve something useful.

I get it though. The concepts are so advanced that you have to internalize the concept first before it can even be used.


Actually not. It's very hard to internalize the concepts before you've used them!


Yes, I get it, but for Haskell the concepts are so abstract you need to internalize the concept before you start using it.


Those weekly Monad blog tutorials probably annoyed the Haskell community more than anyone. Extremely readable and well written papers had already been published by the Haskell community (e.g. Phil Wadler). Read the papers first.



What confuses me with these tutorials is - who is the intended audience? A person who know what a functor is but doesn't know what a monad is? A person who doesn't know Haskell by do know what >>= means?


Honestly, this blog post is cute but it does not actually give you understanding and insight into these concepts. Imho the only way to internalise these are through practice. Write code, get annoyed by repetition/boilerplate/etc... and discover that abstraction Foo solves that problem.


This is one of two posts that made things click for me. This one clearly distinguishes Functors, Applicatives, and Monads visually so that I don't get hung up on the names every time I encounter them.

The other article that demonstrated the practical purpose or path to constructing Monads showed a number of them using nested procedural syntax formatted to look like do-notation. [I can't remember the title/link, if anyone knows please post.] It went through async/await, maybe or list and showed them rearranged with the 'monad'ic parts off to the right where semi-colons might be.

Do-notation just took that and flattened the nested structures. The appreciation that these different computational contexts could all be structured the same way while focusing on what's happening apart from these contexts really demonstrated why it's useful and the power of being able to encapsulate these seemingly different ideas.

I'm sure I'll need to find a part-3 in my journey (perhaps Monad transformers) and so on, but I've at least got a good footing from those.


The best intro to monad transformers that I've seen so far is https://mmhaskell.com/monads/transformers


If learning it is no harder than any other programming language that presumes that every other programming language is equally easy to learn. Furthermore it assumes that people have an equally easy job learning whatever language they learn.

Neither one of these conditions are remotely indicated by research on the subject, therefore the statement is false.

Also I did try learning Haskell at one point and I found it harder than some other languages, I would say I found it Erlang level hard which I tried at about the same time( about 2009) but I think Erlang has a little more inclusiveness to their community IMHO


> Neither one of these conditions are remotely indicated by research on the subject, therefore the statement is false.

That does not follow. "Research does not prove it" means "therefore we don't know it's true", not "therefore it's false".

To your wider point: I think (but cannot prove) that some peoples' brains find FP easier to learn and some peoples' brains find procedural easier. And I think (but can prove even less) that the large majority find procedural easier.


> Neither one of these conditions are remotely indicated by research on the subject, therefore the statement is false.

What is the standard unit of measurement for quantifying just how difficult a language is to learn? Is there a standard unit of measurement for quantifying the competence of a person learning a programming language?


I'm pretty sure the field is too young to have developed a standard unit of measurement of either of these things, if there were such units their creation would be one of the stunning intellectual achievements of mankind in recent history and would have such obvious benefits to many other parts of human endeavor that I would expect it would be a field more investment heavy than machine learning at the moment.

However it has been shown that there are people who seem more attuned to different learning styles, styles of programming, there are differences in difficulty between first language acquisition and later (dependent on language similarity, language domain), and many other studies regarding programming language learning that it can be said not everyone learns equally well every language, and not every language is equally as learnable.


> it can be said not everyone learns equally well every language, and not every language is equally as learnable.

I think that’s reasonable, but the question is then at what point does a language become sufficiently difficult that it no longer provides a good return on investment? And to what degree of proficiency must one achieve in a language in order to productively use it?

It doesn’t take a long time to learn all of Elm, and you can be productive in it quickly.

It takes a very long time to learn all of Haskell, but it does not take a very long time to be productive with it.


totally agreed, and I suppose in Paul Graham's concept of a Blub programming language there might be a language that is difficult to become proficient in but allows you to achieve things that would not be reasonably achievable in other languages.


> If learning it is no harder than any other programming language that presumes that every other programming language is equally easy to learn.

Did you click through to the article? The HN title is (currently) "Learning Haskell is no harder than learning any other programming language". The article's actual title is "You are already smart enough to write Haskell".

You appear to be responding to what a zealous mod wrote not what anyone actually believes.


I am responding to what the modded title is because it is wrong.

the article's actual title is wrong in other ways.

1. because someone might not be smart enough to write Haskell (is that different than program in Haskell?)

2. Because someone might be better suited to other language types than Haskell.

language types I prefer - small instruction sets, functional or declarative (that however is my preference) someone might just find object orientation a more natural way of thinking.


The company I work for defaults to F# when choosing tech for a specific project.

We defaulted to PHP and C# in the past.

We have quite some experience training people who just graduated and even people with backgrounds outside of tech.

Training someone from zero to autonomously writing production code is a lot easier in F# compared to PHP and C#.

We educate people in Elm and Haskell and switch over to F# when they’re ready to try building the first real thing end to end.


F# is really a very nice pragmatic language. C# keeps cannibalizing features from it, which is both good and bad...


As a die-hard .NET/C# developer, I would be curious what you perceive to be the 'bad' side of that coin. I assume something like LINQ would be classified as 'good'?


What he meant by bad is probably the fear that MS will abandon F# once C# has borrowed enough features from it.


The other unfortunate is that as C# accretes features, the old still hangs around. Eventually you end up with too many valid ways of doing things, and cruft that either can't be worked around or was never updated.

It's been over a decade since generics were introduced, but I still encounter code using ArrayList today...


Maybe I missed it, but what's the best way to learn basic Haskell? I was expecting a tutorial. Or link to one.


I really like the concise, well explained university course Brent Yorgey put together several years ago [1]. It also works well as a book club at work.

https://www.seas.upenn.edu/~cis194/fall16/


For me, Get Programming with Haskell by Will Kurt was much better than the 1300 page Haskell book. I loved it.

https://www.amazon.com/Get-Programming-Haskell-Will-Kurt/dp/...



This is the canonical resource, I believe

https://github.com/bitemyapp/learnhaskell



NO. Not that one. It explains the easy parts at (boring, pseudo-funny) length and expects you to understand the hard parts after a few sentences.


It's good at explaining those easy parts, though. It also encourages experimentation in the REPL. For a more complete understanding, I agree that it must be accompanied by some more reference-style material.



Followup on this; Is there an interactive online tutorial like codeacademy for Haskell?


There's Exercism.


I always wondered whether imperative languages feel more intuitive to most programmers because they are:

a) Inherently simpler and require no prior advanced knowledge of math.

b) They are the default in every curriculum and once your brain is wired that way everything else seems counter-intuitive.

I remember in the first grade of my elementary school we were taught basic algebra and how sets work (union, intersection etc). I understood both concepts but I couldn't relate sets to the outside world. Like why do I even need this. Maybe new programmers feel this way about functional languages and Haskell.


Try:

c) That's the way microprocessors actually work. Machine language is just imperative commands and branch instructions.


I am a skilled Scala developer with knowledge of FP sufficient enough to make two contributions to the Typelevel cats/kittens ecosystem. Yet I was not even able to start a hobby project in Haskell. I always start and fall back to Python shortly after. Is it just me?


That's actually interesting. Was it the base Prelude typeclasses that are too different? AFAIK Scala uses monadic code too... I didn't expect too many difficulties using Haskell. Maybe it's the syntax?


Scala has an awesome IDE support, familiar rich JVM ecosystem, familiar syntax, ability to ignore the fact that logging is a side effect...

I had a hard time understanding the Haskell ecosystem, plus the syntax is just way too different and restrictive.

For example, the last tutorial I tried to follow recommended using the Nix package manager to install Haskell. I failed miserably; it just did not work out of the box.


I see, I had more luck using stack but I understand it was a matter of luck.

Someone should make a Haskell fiddle for newcomers.

PS: I wouldn't think syntax would matter that much; was Scala your first FP language?


I'd argue that it's not as simple. But there's a relativity factor here.

1) Haskell is much like math. You look at some concepts and ideas and fail to see what they are about, until 20 years down the road it finally clicks that discrete mathematics already encompassed 90% of computation; it just wrapped it in counting instead of concrete computer instructions. Mathematicians have this culture of going too far, too concise, too fast for most of the population.

2) Mainstream culture imposes a negative toll on this, because when you're fed Java or PHP (say you learned in the 2000s when they were the peak fad) you'll interpret Haskell through them, brain already set on a belief, and it will make it twice as hard. And it's really hard to disentangle the weird bliss of being able to manipulate elements through an interface (here, the syntax and its idioms) from "objective reality". I too was enamored of PHP's associative array syntax (so fun compared to C or Java's lack thereof) and of hitting F5 to see a website change before my eyes. Haskell feels like a punishment compared to this. And that's not even counting the social aspect of it... WordPress brought people colleagues and money.

Anyway, keep learning haskell (and FP, and logic, and math).


The real question is: is it worth the time and effort?


From my perspective, it isn't cut-and-dry. The quick response I give is:

If you are developing a business application and no one dies if something goes wrong for a few hours, the answer is "hell no". Productivity takes a hit for no strong upside to the biz.

If you are developing a safety-critical system, the answer is "maybe". This one is more obvious.

The more complex reasoning is that ultimately, almost every software system has to interface with the outside world in some way. Putting your functional programming layer as your principal interface between your internal and external domains is an extremely high-cost endeavor due to the complexity of handling things you can't predict. If your system is safety-critical, and you have extensive control over the external domain (e.g. dedicated, redundant sensor networks), you might be able to justify this added complexity because you can control many more variables than you would be able to otherwise.

An alternative approach that seems ideal to me at this time is to use imperative techniques as the principal architecture, and then use functional within that domain in the specific cases where it can be justified. Good examples of this would be C#/LINQ, or even invocation of F# from C# (e.g. rules engine). Imperative is extremely good at handling side-effects and managing exceptions. Functional is deterministic if you can keep it on rails. Using both where most suitable w/ interop between seems to be the most productive approach.

A quick corollary could be: "If you are exclusively using functional techniques to build your application, you are probably making a mistake".


I'm writing most of my apps in Elixir which is also functional. I got no problems wiring side effects with it despite it being less imperative than some C#/Java/friends. In fact, I'd write high availability apps on Elixir any day over C# or other OOP/imperative lang. I don't ever want to touch any safety-critical systems though, but I do care about correctness of the business logic. Somehow I have zero interest in Microsoft langs/runtimes in general.

That said, Haskell just feels like a different beast. I also don't know how interop would actually work with these different solutions, would feel like unnecessary complexity if anything. Only to end up losing the benefits of Haskell's strong typing.

I've been struggling to justify spending more time & effort on concepts mostly exclusive to Haskell, only to be able to be productive on it. I keep thinking I should brush up my Elixir skills instead.


> as your principal interface between your internal and external domains is an extremely high-cost endeavor due to the complexity of handling things you can't predict.

I fail to see why if you believe this is true of haskell, it would not also be true of C#


The funny thing is that software engineering tools exist to relieve programmers of mental overload. This in turn can be used to build more complicated systems, at the cost of another level of mental overload.

So this begs the question: where are the complicated systems written in Haskell that could not have been realized in other languages? Or are Haskell programmers only using the language to sleep better at night?


> Or are Haskell programmers only using the language to sleep better at night?

For me, the balance point lies definitely closer to the latter. I'm not writing hugely more complicated programs in Haskell than I did in Python but I feel much better knowing they're not likely to fall into pieces next time I touch them.


I guess it ultimately boils down to productivity. I mean that's what high level programming languages are built for to begin with right?


I write Scala and Java for a living and it absolutely is :)


I've learned Rust in a month of sparse coding; in fact, I'm as comfortable today writing Rust as I am writing Go. I've been struggling to learn Haskell for over half a year. I think it's an amazing language, don't get me wrong, but the claim that Haskell is no harder to learn than any other programming language is false. After taking a few online courses, tutorials and so on, I still believe it's one of the most interesting (if not the most interesting) programming languages in existence, and also the hardest.


Did you click through to the article? The HN title is (currently) "Learning Haskell is no harder than learning any other programming language". The article's actual title is "You are already smart enough to write Haskell".

You appear to be responding to what a zealous mod wrote not what anyone actually believes.

(See also https://news.ycombinator.com/item?id=21171360)


>What do you need to write real Haskell? The core of the language is extremely simple. If you understand how to write and use

>pure functions and values
>pattern matching
>sum and product types
>how to glue together IO to make side effects

You can do pure functions, pattern matching, and sum and product types in plenty of other languages that have much more user-friendly syntax, community, and documentation. It also doesn't hurt that they have impure IO, so you don't have to jury-rig impure IO using a monad.
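For concreteness, here is roughly what those four pieces look like in Haskell; the type and function names below are illustrative, not from the article:

```haskell
-- A sum type: a Shape is exactly one of two alternatives.
data Shape
  = Circle Double          -- product: a radius
  | Rect Double Double     -- product: width and height

-- A pure function, defined by pattern matching on each constructor.
area :: Shape -> Double
area (Circle r) = pi * r * r
area (Rect w h) = w * h

-- Gluing together IO: side effects are ordinary values of type IO ().
report :: Shape -> IO ()
report s = putStrLn ("area = " ++ show (area s))
```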


I don't think what you say contradicts anything in the article ...


It does. Haskell is harder to learn than other languages because it has a difficult syntax, navel-gazing community, and complex documentation.


You're right! Turns out it's me that hasn't read the article. Anyway, remove the sentence about it not being harder to learn than any other language (which I think is unfortunate) and then I don't think what you say contradicts it. Haskell can be very hard to learn yet still within reach of most programmers.


I took a Haskell class (Functional Programming) a couple of years back. For reference, I had worked for 12 years before that as a software engineer using Pascal, C, C++, Java, Perl, Shell, Python, and Visual Basic, and I felt like I was learning Chinese. I really put in extra effort, but monads, applicatives, etc. were complicated and way above my head. I never found a reason to continue that punishment. I admire the teacher and the community who work on it; maybe it's just not for me. But it's not easy.


Haskell basics are easy to learn, but nobody learns Haskell to linger on the basics. Haskell is a useful tool for expressing deep ideas which aren't basic and require time, effort, focus and guidance to learn properly. You can still do quite a bit with the basics: I solved many Euler problems using Haskell and prefer it over Python for that.
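For example, Project Euler's first problem (sum all multiples of 3 or 5 below 1000) needs nothing beyond the basics:

```haskell
-- Project Euler #1: sum of all multiples of 3 or 5 below 1000.
euler1 :: Int
euler1 = sum [n | n <- [1 .. 999], n `mod` 3 == 0 || n `mod` 5 == 0]
-- euler1 == 233168
```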


I mostly agree with the article, but not with the HN title (which doesn't match the article's title). The article itself has a nice statement:

> If you’ve already learned another language, you can learn Haskell. And even if you haven’t, learning Haskell is no harder than learning any other programming language.

But most programmers are familiar with imperative languages, and can quite easily switch between them (possibly writing non-idiomatic and awkward code, but without having to really learn the language from scratch to do so), while a purely functional language with unfamiliar syntax doesn't let one do that as easily. So indeed, if you haven't learned other languages, it's probably no harder; and if you have, you can still learn it, but it will likely be harder to learn than yet another imperative language would be.


> I mostly agree with the article, but not with the HN title (which doesn't match the article's title).

I think the HN title used to be correct. Perhaps the mods changed it. I can't imagine why. It's a completely different claim from the article!


As a Haskell n00b, I'd like to mention something I can do in about 20 other languages but not in Haskell:

( UPDATE: I stand corrected, in a reply by tome. )

I'm writing and debugging some functional code - in Clojure. There's a function I've smoke-tested on its own but that's failing me with "real" data. Lacking a decent debugger for Clojure and being too lazy to isolate my problem into a test setup, my tool of choice is the lowly "debug print."

In Clojure, the body of a function is essentially imperative, and I can insert a `(println "label:" value)`. To do the same thing in Haskell, I'd have to restructure my whole damn program.

I understand the rationale for purity in Haskell, but sometimes I see it as badly standing in my way of accomplishing what I want to do.


Have you heard of Debug.Trace? It gives you exactly what you are looking for.

https://hackage.haskell.org/package/base-4.12.0.0/docs/Debug...
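For instance (a minimal sketch; the `double` function is hypothetical, but `trace` is the real Debug.Trace API):

```haskell
import Debug.Trace (trace)

-- A pure function with a debug print spliced in: trace logs its
-- first argument to stderr and returns its second unchanged.
double :: Int -> Int
double x = trace ("double called with: " ++ show x) (x * 2)
```

No restructuring needed; you delete the `trace` call when you're done.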


Wow, cool! Thanks. I hope to do more with Haskell once I retire, and this may stand me in good stead.

I think my point is still supported to some extent by the fact that this is knowledge from outside of the language that I'd need to know to accomplish. It's something I need someone helpful like yourself to tell me about. Also something that only works because this library deliberately breaks Haskell's rules.


> Wow, cool! Thanks. I hope to do more with Haskell once I retire, and this may stand me in good stead.

I'm glad it was helpful.

> It's something I need someone helpful like yourself to tell me about.

In general we're happy to help but it does annoy us when people make assumptions about what it's like to program in Haskell without actually trying it. If you'd like help I can suggest Haskell Reddit https://www.reddit.com/r/haskell/ or emailing me personally http://web.jaguarpaw.co.uk/~tom/contact

> something that only works because this library deliberately breaks Haskell's rules.

Only if you have very punitive assumptions about what Haskell's rules are, like those who have never used the language often do. People who actually write Haskell programs have other ideas.


When I tried to learn Haskell, I tried doing the kind of stuff I usually do in Ruby: parsing log files, converting text formats, etc. It seemed hopeless. I'm curious to hear about the experience of someone who uses Haskell for everyday tasks. Does anyone do that?


There are languages that attract people who want to get things done (mostly multi paradigm languages these days) and there are languages that attract people who want to feel special (languages oriented on one true way or obsessed with a particular purity).


Yeah, OK, let's say it's true that it's no more difficult to learn Haskell as opposed to any other language... with a mindshare of 0.29% (according to something like this: http://pypl.github.io/PYPL.html), isn't that like saying that learning "Upper Egyptian Arabic" is no harder than learning a less esoteric language? I.e., WHY would you want to learn it unless you have a very specific use case, like moving to Egypt?

(edited for a word)


Did you click through to the article? The HN title is (currently) "Learning Haskell is no harder than learning any other programming language". The article's actual title is "You are already smart enough to write Haskell".

You appear to be responding to what a zealous mod wrote not what anyone actually believes.

(See also https://news.ycombinator.com/item?id=21171487)


My point was/is that even if it IS easy to learn, relatively speaking, WHY would you want to learn it when it's obviously a niche language... I mean a REALLY small niche at that...


Mirroring some of the comments here, just because you don't understand the internals does not mean you can't pick it up and be productive with it. This is what you do with any other language, but it just seems more natural to you (even though they are really just as complicated).

Hence I feel like one of the big issues is that people learn to program in a non declarative/imperative way. When you first pick up python / c / c++ etc, there are plenty of details you don't necessarily understand (who here has absorbed all of the c++ spec?).

At most Haskell is just different and I don't think it requires more mental effort than anything else.


Everyone is smart enough to write Haskell. Almost no one is smart enough to read Haskell.

There is a reason loops are easier to read and maintain compared to a fold(map(filter(zip(...)))).


That's your opinion, and one that's not terribly well-informed. I find complicated loops difficult to read: multiple things (besides the loop induction variable) are being mutated, and I have to manually pattern-match whatever is in the loop back into well-known operations like fold and map and filter. With the functional version, I can easily cut and paste portions of the expression into a REPL to see what it's doing.

Long before Haskell became as mainstream as it is now, C++ people have discovered the benefits of STL algorithms over raw loops.


To give you a counter example: when i write code in an imperative language i always first write the logic out as fold/map/filter and then translate that into an imperative loop.


That's not a counter example. I stated map/fold/filter is difficult to read. You said they are easier to write.

These are not opposite, and I happen to agree with you.


Fair comment. I do find reading it much easier as well, which I guess boils down to familiarity; but when reading I don't need to translate, hence my comment re writing.


> There is a reason loops are easier to read and maintain compared to a fold(map(filter(zip(...)))).

Absolutely not, and I read and maintain loops in both Python and Haskell!


Even in Python I find the vaguely more functional style of list comprehensions easier to read than explicit loops. I've taught enough classes to know that many students who've had only a CS-101 type class do prefer explicit for loops that accumulate a result rather than list comprehensions, though. Seems like any statements about clarity (in either direction) have to include a fairly big caveat about who the audience is.


Nice that you mention Python. The creator of Python echoes my points: http://lambda-the-ultimate.org/node/587


Oh, I know. I use Python more by necessity than choice and it's only made bearable by features that Guido doesn't want to be there!

If only it had tail call optimisation, multi-line lambdas and other features that Guido doesn't want. Then it might be even more than bearable.


I dunno. I like that style, makes more sense to me.


Okay - so what is that reason?


well, it'd be more like

(foldr op z . map f . filter p) (zip l1 l2)

Which I think is very understandable as long as you're familiar with what all of them do. Which you become, very quickly.
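Spelled out as a compilable sketch (note that zip is applied first, since it takes two arguments, and the fold needs a starting value; the concrete functions here are just for illustration):

```haskell
-- Pair up two lists, keep pairs whose first element is even,
-- multiply within each pair, then sum the results.
pipeline :: [Int] -> [Int] -> Int
pipeline l1 l2 =
  (foldr (+) 0 . map (uncurry (*)) . filter (even . fst)) (zip l1 l2)
```

Each stage can be tested on its own in the REPL, which is part of the appeal.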


Learning Chinese is no harder than learning any other language. Unless there's a language you already know that is unlike Chinese and more like other languages.


Learning Chinese is objectively more difficult than learning alphabetic languages. Although learning Chinese really would be more like learning APL.


That's written Chinese. Learning spoken Chinese is more difficult if you're not used to a tonal language, though.


Can anyone recommend an intro to Haskell for me? ~4 years of Python experience and a math background. Ideally something online/interactive ...


The canonical source for Haskell introductions is the following, I believe

https://github.com/bitemyapp/learnhaskell


Maybe things have changed since 2005 when I first used it but Haskell was certainly harder for me and the people I knew to learn than other languages. For languages of a similar power I can see Haskell being similar difficulty to learn. But compared with something like Python the amount of effort needed to go from zero knowledge to being able to write some sort of program that produces some useful real world results is much higher in Haskell.

I say this as someone who quite likes Haskell, when you have a problem that fits well with Haskell it's really amazing.


Things have changed significantly in the last four years, so they definitely will have changed a huge amount in the last 14 years.


I've struggled with Haskell for a long time. Surprisingly, learning Clojure made many Haskell concepts clear and more approachable for me.


I would expect to get paid more though and that is not happening (except niche places). Same story with F# and Scala. Not worth it.


Scala software engineers are some of the best paid out there.


What are the examples? There is no requirement for specific domain experience?


Well, I was hired into dev at a quant finance shop (you may draw your own inferences about pay), using F#, with no experience of quant finance.


Niche. Wouldn't be surprised if it's G-Research or Jane Street or one of those very few places.


Could you give an example of something that's not niche? If quant finance is niche, it's quite a big niche.


I don't know whether this is true or not, but either way, working with Haskell makes my job more enjoyable than if I was writing some other language anyway.


It's not the language that's frustrating to learn, it's the tooling.


Honestly, I think Haskellers should spend less time writing blogs and more time writing code that does something interesting/useful. When the language you claim is so superior comes with such bad tooling and usability, your claim instantly loses any substance.


Strange. I spend the vast majority of my time writing code that does something interesting/useful. In fact, it puts food on the table and a roof over the heads of several families.

The reason why I’ve given up my own time to write some blog posts (apart from it just being an occasional pastime), is that everyone complains that Haskell doesn’t have enough documentation or tutorials.

Damned if we do; damned if we don’t.


* "Haskellers write too much code but don't explain the language. They should write tutorials."

* "Haskellers just write tutorials but there's no good practical code."

* "Haskell will never catch on because it's too hard. They think it's easy but that's just because they're geniuses."

* "Haskellers are obnoxious and condescending. The language is easy but they want to pretend it makes them special and clever."

* "The Haskell type checker only checks really simple properties that are easy to prove by hand"

* "Haskell doesn't come with tooling that automate refactorings that are easy to make by hand"

The list of contradictory complaints goes on ...


We absolutely need more hands-on Haskell tutorials and blogs. I've been massively more productive with Haskell with the help of just a few well written blogs that are full of tutorial posts.

We have a lot of books detailing various type theories, algorithmic nuances and so on in Haskell, but very few resources to actually help anyone get off the ground with practical web development.


> I've been massively more productive with Haskell with the help of just a few well written blogs that are full of tutorial posts.

Could you share some examples? It's probably a good idea to promote useful content!


The best resource for me has been Matt Parsons' blog: https://www.parsonsmatt.org/


Haskell is easy to learn if you want to ignore a bunch of floofy high-level math concepts and just paste in a how-to-parse-JSON library.

Let's be real: the software engineering that brings results is I/O. I see the appeal in pure functions and guarding that out, but the shit that matters is going to be the nasty mess that saves the business logic, hacked together over 10 years of R&D, interns and crunches.


Algebraic data types and monadic types are perfect for business logic.

Read the book Domain Modeling Made Functional.


This too, eh? "Domain Modeling Made Functional" - Scott Wlaschin

https://www.youtube.com/watch?v=Up7LcbGZFuo


I'm not sure the idea of a "beginner" language should be a design goal, and I don't think Haskell being a hard language to learn means it's a bad language.

Easy as a design goal has led to many languages like AppleScript that try to ape natural language, or to visual languages, and those consistently seem to be bad choices.

Haskell is probably not as hard as many people who haven't made the jump think it is. But it is hard.

> What do you need to write real Haskell? The core of the language is extremely simple. If you understand how to write and use [four items broken out below] then you already have everything you need to write useful programs that solve real-world problems.

The core is admirably simple, but simple doesn't mean easy. Easy to learn is when you have intuition and existing skill that readily map to the core constructs used in a language.

> pattern matching

There are cases where pattern matching is obvious, but this is not how most people have learned to structure a problem.

People aren't naturally good at breaking problems into parts, that's a skill you have to develop over two or three years, which is my guess from observing myself and peers.
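As a small illustration of what that structuring looks like (a hypothetical example, not from the OP), pattern matching forces you to break a list problem into its two cases — empty, or a head plus a rest:

```haskell
-- Breaking the problem into its cases: an empty list,
-- or a first element followed by the remaining list.
total :: [Int] -> Int
total []       = 0
total (x : xs) = x + total xs
```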

> sum and product types

Explicit types are already hard; you're already excluding many people who have trouble making the leap from Javascript to Java.

But Haskell's type system is heavily generic. And its support for container types is not so great. Making them easy was a major reason Perl, Python and Ruby grew quickly, and landed us a ton of horrible code.

To understand how Haskell makes these container types hard, compare the API for Data.Map[1] to the API for java.util.Map[2]. You can ignore Java's rather pathetic facilities for FP entirely because you don't need them.

The Haskell version has functions to do lookup, union, intersection and various predicates. In Java, they're trivial to implement with a loop.
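For reference, the Data.Map side looks like this (a sketch using the containers package that ships with GHC; the map contents are made up):

```haskell
import qualified Data.Map as Map

inventory :: Map.Map String Int
inventory = Map.fromList [("apples", 3), ("pears", 5)]

restock :: Map.Map String Int
restock = Map.fromList [("pears", 2), ("plums", 7)]

-- Union and intersection come ready-made, with a combining function
-- for keys present on both sides.
combined :: Map.Map String Int
combined = Map.unionWith (+) inventory restock

-- Lookup returns Maybe, making the "key absent" case explicit.
pears :: Maybe Int
pears = Map.lookup "pears" combined
```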

> pure functions and values

The intuitive way to change a thing is... well, to change a thing. I can express it directly in plain language. To express the pure way, I have to describe it as "construct a new value such that all properties are the same except for the changed properties."
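In fairness, Haskell gives that "same value except for the changed fields" dance dedicated syntax — record update. A sketch with a hypothetical type:

```haskell
data Person = Person { name :: String, age :: Int }

-- Record update syntax: a new Person sharing every field
-- with p except age, which is incremented.
birthday :: Person -> Person
birthday p = p { age = age p + 1 }
```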

I've seen a meme that shows Common Core multiplication in a pedagogical context, and contrasts it with the traditional long-form multiplication. It completely misses what CC is trying to do, but it's useful in showing what people are comfortable with: they grok procedures much better than abstractions.

Functions are the part of math where most people who were okay with algebra nevertheless started tuning out. First-class functions are weirder still. Haskell also mandates currying, and being strongly typed every function gets a confusing type signature.
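For instance, a two-argument function is really a chain of one-argument functions, and the type signature shows it:

```haskell
-- add :: Int -> Int -> Int reads as Int -> (Int -> Int):
-- applying one argument yields another function.
add :: Int -> Int -> Int
add x y = x + y

-- Partial application: supply one argument, get a new function.
increment :: Int -> Int
increment = add 1
```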

All that said, keeping track of state changed by mutation is deceptively hard. It's a case where your intuition routinely leads you to build things you can't maintain. But sticking strictly to the claim in the OP, it's easy to learn to mutate state but hard to do.

> how to glue together IO to make side effects

This shows that monads are core to the language, and they have all the difficulties of generic types as well.

Everyone has jumped on how hard monads are already so I won't belabor the point, just to say that except for concurrency / parallelism, I don't think there are any core language features as hard to understand as monads.

(And I'm not sure exactly what the "core language" is, but I think anything in the Prelude is fair game.)

Haskell does make them dramatically easier with "do" notation, but I think you really do need to grok monads for your learning to progress beyond simple examples.
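A sketch of what that sugar buys you — the do block below sequences IO actions and desugars to the underlying monadic operators (the function itself is hypothetical):

```haskell
-- do notation sequences IO actions in order, top to bottom.
greet :: String -> IO ()
greet name = do
  putStrLn ("hello, " ++ name)
  putStrLn "goodbye"

-- Without the sugar, the same function uses (>>) directly:
-- greet name = putStrLn ("hello, " ++ name) >> putStrLn "goodbye"
```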

[1]: https://hackage.haskell.org/package/containers-0.6.2.1/docs/...

[2]: https://docs.oracle.com/en/java/javase/12/docs/api/java.base...


> To understand how Haskell makes these container types hard, compare the API for Data.Map to the API for java.util.Map[2]. You can ignore Java's rather pathetic facilities for FP entirely because you don't need them.

> The Haskell version has functions to do lookup, union, intersection and various predicates. In Java, they're trivial to implement with a loop.

I don't get it. I don't see how you do lookup with a loop at all. As for union and intersection, are you sure you write them in Java with a loop? The way that's obvious to me would be quadratic. Is there a better way?


> All that said, keeping track of state changed by mutation is deceptively hard. It's a case where your intuition routinely leads you to build things you can't maintain. But sticking strictly to the claim in the OP, it's easy to learn to mutate state but hard to do.

Hard to do... safely. Easy to do wrong, though. But wrong might still work most of the time. [Edit: "Works most of the time" is not praise, nor endorsement of the approach.]

To me, this is the fundamental thing that the FP people have right. Constrain your state space or die.


...but have no reason to.


Not la. Clojure.



