I struggle with this. After years of studying OO and design patterns in Ruby and JavaScript I was having trouble building complex asynchronous systems, and I stumbled into Go. I saw the value of types for managing that complexity and of the runtime for supporting asynchronous primitives, but I felt very limited by the lack of generics for things like collections and of higher-order control flow for things like error handling.
Eventually I arrived at Haskell after strongly considering Clojure, and I'm very happy about what I've learned and my new way of approaching programming complexity. Unfortunately there is nowhere else to go from here, since I can't find employment as a Haskell programmer. OCaml, F#, Scala, Elixir, and Elm all feel like a step back. Now I'm a Java programmer and quite miserable. I feel hampered by the language nearly every day in terms of how easily I can express my thoughts in code. Haskell isn't perfect, but it is the best fit for my mathematical mindset.
I am trying to lead by example. I help host my city's Haskell meetup and contribute to a Haskell reading group. The path is lonely; my co-workers poke fun at me and don't care to put in any effort to understand what I have to say. I love teaching and explaining things, but there is zero interest because the machine keeps moving. Not many at the office even enjoy Java, but they are resigned to it for our large pool of enterprise clients. All in all, each work day is a void I put 8 hours into, which is fine compared to most working conditions; it causes no suffering beyond the lacuna.
I'm past the stage of trying to convince other programmers of anything. I recognize many are happy with their tools. I yearn for that happiness and don't seek to spread my misery. I offer my time to those who are interested and want to learn more.
Anyway, this article nails it: Haskell has advantages, but they aren't enough to change things without the infrastructure the author is building. I look forward to being an early adopter of Unison and continue to remain hopeful for the future despite the long odds.
I think for many folks Haskell is too big of a gulp all at once. It's not that it's too hard (I think that's a bit of a myth). It's that it's just too different. It's hard for an experienced Java programmer to go from being highly productive to seriously struggling to even solve FizzBuzz in Haskell, let alone dealing with lazy IO and various terms and concepts that don't show up anywhere else like monad transformer stacks.
In my experience, this is where languages like Elm come in and are extremely valuable. This is also why I think Elm is much more important than PureScript. I've recently had a lot of success advocating for and using Elm for an internal tool at work. We've now got several thousand lines of pure Elm code, and for the most part everybody has been incredibly impressed with the overall Elm development experience. I'll also say that the tool ended up being a lot more powerful and feature-rich than originally planned because Elm made it so easy to keep expanding and improving the application.
Java -> Elm -> Haskell/PureScript is a much more enjoyable path for many than Java -> Haskell. It also feels a lot more motivated. Use Elm for a while, and you see many of the strengths of Haskell, but you also see many of the weaknesses of Elm. Once you've seen those weaknesses, you'll be happy to find that Haskell solves most of them. Now you have a concrete reason to look at Haskell, and that can make a big difference.
I tried Elm with much enthusiasm, but ended up putting it aside for now (0.17) after being bored by JSON decoding (of big, deeply nested API response objects). Could there be a way to more succinctly (declaratively?) express my data types and keep Elm's awesome typing/compiler guarantees, while avoiding the painful construction of repetitive, nested, boring decoders?
noredink's json-decode-pipeline [1] seems better than what Elm offers out of the box, but it still feels like a lot of ceremony for something mundane. People building webapps [2] to help users generate boilerplate code seems like a hint that something's wrong, doesn't it?
If you (or other passersby) have experience in how JSON decoding is achieved in Haskell (or other statically-typed functional languages), I'd be curious to read some sort of comparison with Elm.
A quick Googling suggests Data.Aeson [1] is the popular library in Haskell land. Does it rank better on what I was hoping for (succinctness, ease of use)? If yes, would it be reachable within Elm, or are some Haskell constructs that Elm lacks needed to achieve a similar experience?
Because of this, it's not necessary in that example to even write the JSON decoder manually. The compiler can generate it on its own. I don't personally know of any way to achieve this in Elm, as Elm doesn't have typeclasses and what's effectively happening here is that an instance of the FromJSON typeclass is being computed by the compiler.
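To make that concrete, here's a minimal sketch of Generic-derived decoding with aeson (the User record and its fields are made up for illustration, and this assumes the aeson and bytestring packages are available):

{-# LANGUAGE DeriveGeneric #-}
module Main where

import Data.Aeson (FromJSON, decode)
import GHC.Generics (Generic)
import qualified Data.ByteString.Lazy.Char8 as BL

-- A record mirroring the JSON shape; no hand-written decoder anywhere.
data User = User
  { name :: String
  , age  :: Int
  } deriving (Show, Generic)

-- The empty instance asks the compiler to derive the decoder from the Generic representation.
instance FromJSON User

main :: IO ()
main = print (decode (BL.pack "{\"name\":\"Ada\",\"age\":36}") :: Maybe User)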
This is the case for me. I simply don't have the time to invest in a technology that I keep hearing great things about, but from which I don't see many interesting things being built. I don't mean to knock on Haskell; I'd like to learn it, I just can't justify the time. I'm hoping to get there eventually, perhaps by pursuing Rust more or checking out Reason/OCaml, but I don't have many applications that could benefit from functional languages (at least not functional languages that are markedly slower than the languages I'm building things with today).
My advice: baby steps. Yes, Elixir or OCaml or Rust or Elm would be a "step back" (I would disagree for the Erlang ecosystem: no type system, but the paradigm is just as solid, just widely different. There is a reason SPJ and John Hughes are close to the Erlang community).
But it would also be a "step up". You would feel better than with Java, and you would have an easier adoption than with Haskell. And in a couple of years, you would add a Haskell dependency to them. And then a bit more. And in a decade you will have helped clean up the mess we are in a bit.
Have a goal. Try to find a path to reach it. Look at how to improve things :)
I know for a fact that the Elm, Elixir, and Rust communities would all be really happy to help with both the transition and evolving the tooling if needed. Come talk to us. Try to make your life a bit better :)
(I do not know the OCaml community well, but I bet you would also find helpful people there.)
- How does Haskell fare with large projects that use APIs that are inherently stateful, like OpenGL? Don't things get messy and ugly as the pure world of Haskell is being tainted?
- How do I optimise Haskell code without having studied the language for decades? I had a lecture where we were taught Haskell, so I know the language, but darn, even a simple hash map seems to be so very complex in the language. The fact that everything is a linked list (bad for caching & performance) and everything gets copied around really turns me off.
> Don't things get messy and ugly as the pure world of Haskell is being tainted?
Haskell is excellent at handling state. "Pure" doesn't mean no state, it means all state is handled explicitly in a type-safe way.
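For a flavor of what "explicit state" means, a minimal sketch using the State monad (assuming the mtl package; the counter is just a toy):

import Control.Monad.State (State, get, put, runState)

-- A toy counter: the state is visible in the type, not hidden behind mutation.
tick :: State Int Int
tick = do
  n <- get
  put (n + 1)
  return n

main :: IO ()
main = print (runState (tick >> tick >> tick) 0)  -- prints (2,3): last result and final state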
> How do I optimise Haskell code without having studied the language for decades?
It requires learning some new things, but honestly when I worked at a company that used Haskell I didn't really have to worry about this much. You occasionally make some things strict and there are some good heuristics about when this is needed, but overall it just didn't come up often.
> I had a lecture where we were taught Haskell, so I know the language
No you don't. You can learn Python in a lecture if you know Ruby; you can't learn Haskell in a lecture unless you already knew SML or OCaml, and even then probably not.
> The fact that everything is a linked list (bad for caching & performance) and everything gets copied around really turns me off.
Everything isn't a linked list in production code. It's easy to swap lists out for vectors wherever it's correct to do so, because you can write almost all of your code generalized to the necessary type class rather than specific to one single container type. Real Haskell code performs well, poorly-performing code is just used for examples when teaching beginners because it's easier and introduces fewer things at once.
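For example, something like this sketch works unchanged for lists and vectors (assuming the vector package; total is just an illustrative name):

import Data.Foldable (foldl')
import qualified Data.Vector as V

-- Generalized to any Foldable container, not to one concrete type.
total :: (Foldable t, Num a) => t a -> a
total = foldl' (+) 0

main :: IO ()
main = do
  print (total [1, 2, 3 :: Int])              -- list
  print (total (V.fromList [1, 2, 3 :: Int])) -- vector, same code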
> you can't learn Haskell in a lecture unless you already knew SML or OCaml, and even then probably not.
I just want to emphasize this part because it's really quite true, as someone who came to it from F#/OCaml.
The thing with learning "haskell" is that there's far more to it than learning the basic language constructs. You can learn Haskell the language in about a day, but you can't really learn Haskell the paradigm/philosophy/mathematical discipline in a day, much less actually program Haskell that quickly.
That being said, it's not as difficult as it's made out to be, the mountain you need to climb is much shorter than people realize. The issue is largely one of jargon, and getting used to using and thinking in all the new terms and concepts that really have few equivalents in other languages.
> The thing with learning "haskell", is that there's faar more to it than learning the basic language constructs
That would be a compelling reason as to why it's not popular. This basically means your programming experience is not transferable, leaving it as a weird side language that some people stumble into due to uncommon factors.
Oh it almost certainly is the main reason why it isn't popular. Then again, that is also the same reason why abstract mathematics in general isn't popular.
Saying the experience is "not transferable" is looking at it the wrong way. Learning the discipline behind haskell isn't even remotely the same kind of beast as learning how to use a niche legacy framework in COBOL for example.
It's a lot more akin to learning to read human language for the first time, because all it's doing is familiarizing you with patterns that exist in code and computation, independent of Haskell. The actual language part of the haskell equation is literally the least relevant/significant, because what it teaches you is to recognize mathematical/computational patterns that you can find in any language, despite the difficulty of expressing some of those concepts succinctly in some languages (e.g. java).
The biggest thing that's 'not transferable' is the ability to communicate the high level patterns to others who haven't gained literacy with that particular branch of mathematics/CS yet. But that's true of all sciences and mathematics. You can call it a "weird side language", but that's kinda doing it a disservice as a language that's intended to express complex relationships and computational patterns in the most general way possible.
> Don't things get messy and ugly as the pure world of Haskell is being tainted?
The word "pure" gets thrown around with regards to haskell, but most of its draw and apparent purity, isn't the same as the technical definition of it being a "pure" functional functional language.
This "pure" functional aspect is more of a quirk that you get used to working with, rather than a noticeable feature.
I would argue that the seeming 'purity' most Haskellers love about the language has to do with its ability to cleanly and flexibly model domains without fear of refactoring, or worrying about fitting them to strict, opinionated frameworks. This is because the language is flexible enough to let you really model your thoughts however you want, regardless of what they'll be plugging into. Meaning that you can focus 'purely' on the business logic.
So no, the 'purity' shouldn't be affected very much in large projects.
> The fact that everything is a linked list (bad for caching & performance) and everything gets copied around really turns me off.
It sounds more like you're inadvertently looking to shoehorn your existing mental model of how "performant code should work" into Haskell, rather than looking into how "performance with Haskell" is achieved. Haskell works on a fairly different paradigm, so you can't just make assumptions about it like that. For example, hashmaps are largely unnecessary in Haskell, and you find out why the more you use it. The linked lists in Haskell are also not much of a concern, because they're already highly optimized, due to the fact that they have to handle being infinitely large, not to mention that there are other data structures readily available if needed. Overall, Haskell is already quite performant, and you can turn its laziness off wherever you want if you really have a hard time reasoning about its lazy-evaluation performance anyway.
> For example, hashmaps are largely unnecessary in Haskell, and you find out why the more you use it.
I use Haskell quite a bit and I don't know what you would use instead of a strict hashmap for something like O(1) lookups in an application to correct spelling. Do you?
> How does Haskell fare with large projects that use APIs that are inherently stateful, like OpenGL? Don't things get messy and ugly as the pure world of Haskell is being tainted?
The pure world of Haskell does not get tainted. Monads help manage the inherent messiness better. As for your question, perhaps this GLUT example I quickly found and skimmed will help:
> How do I optimise Haskell code without having studied the language for decades?
Optimizing is where I've been having trouble lately. You need to understand laziness, inlining, and reading basic Core, IMO.
For understanding laziness, read a few tutorials. Then use the ghci debugger as described in the GHC manual. Play around by adding BangPatterns based on educated guesses from whatever intuition you've formed about how laziness works.
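A minimal sketch of the kind of change that usually amounts to: a strict accumulator via BangPatterns, versus the lazy version that piles up thunks:

{-# LANGUAGE BangPatterns #-}

-- Lazy accumulator: builds up (0 + 1 + 2 + ...) as unevaluated thunks.
sumLazy :: [Int] -> Int
sumLazy = go 0
  where
    go acc []     = acc
    go acc (x:xs) = go (acc + x) xs

-- Strict accumulator: the bang forces acc at each step, so space stays constant.
sumStrict :: [Int] -> Int
sumStrict = go 0
  where
    go !acc []     = acc
    go !acc (x:xs) = go (acc + x) xs

main :: IO ()
main = print (sumStrict [1 .. 1000000])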
For understanding basic optimization (space and time):
Try generating Core for some Haskell code you want to know about. Tie what you've learned to the Core generated from Haskell code that solves one of your real-world problems.
A bug report that is a good example of how to read Core; at least it seemed easy to tie the very simplistic code to the expanded Core in combination with the commentary.
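For instance, one way to get readable Core out of GHC for a toy module (the -ddump flags are standard GHC flags; the module itself is made up):

-- Compile with:  ghc -O2 -ddump-simpl -dsuppress-all Square.hs
-- and read the Core that GHC dumps for this tiny definition.
module Square where

square :: Int -> Int
square x = x * x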
The thing is that the number of developers in the world is growing fast, so most people need simple tools to get started. Once the field matures, people will want to be able to express themselves more fluently; the whole reason languages like Haskell are getting more popular is that this is already happening.
Keep fighting the good fight! I'm happy to hear that you took the initiative to host meetups and participate in reading groups.
Haskell is beautiful in its way, but it isn't the only game in town. Try some other languages that are similar in philosophy but have different strengths than Haskell. You'll be a lot happier and you'll also stand a better chance of convincing your coworkers to try Haskell for some things.
It's like I accidentally stuck my hand in a running blender and someone asks whether I'm angry because of the noise of the motor or because the buttons aren't arranged in a reasonable order.
I tried learning about Haskell just so I could understand a flurry of articles I came across that were trying to explain monads. But mostly I found super abstract notation, something about bind and unit and >>=, and I came away just feeling dumb. I gave up pretty quickly, but to be fair I suppose I wasn't super motivated to work through it.
Later, I learned Rust just for fun. Options? Sounds like a good way to express uncertainty about a function call. Oh, and you can compose it with other Options using this function called and_then(), and things will short-circuit if they fail. Neat!
A year later I find out I had been productively using monads the whole time without even knowing it. It's so frustrating knowing how easy it could have been to learn this concept explicitly the first time around.
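For anyone else in that boat, here's roughly the Haskell spelling of the same short-circuiting pattern, with Maybe's >>= playing the role of Option's and_then (just a sketch):

-- Division that can fail, composed so the first Nothing short-circuits the rest.
safeDiv :: Double -> Double -> Maybe Double
safeDiv _ 0 = Nothing
safeDiv x y = Just (x / y)

main :: IO ()
main = do
  print (safeDiv 10 2 >>= safeDiv 100)  -- Just 20.0
  print (safeDiv 10 0 >>= safeDiv 100)  -- Nothing: the failure propagates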
Go is successful because it's exactly what Google needs to write back-end code. It's (almost) memory-safe. It comes with a set of well-debugged libraries for doing most of the things you need to do on a web server. That's good enough for most web-related work.
From a language theory perspective, there's a lot to criticize about Go. The "goroutine" and "channel" thing turned out to be less useful for general concurrency than expected. But it's good enough to service lots of network connections. The lack of generics means that functions like "sort" are painful. But you can still get sorting done; it's just clunky.
Haskell hasn't taken over the world because it's designed by theorists for theorists. It's too clever, or, "l33t", like LISP. Also, anything interactive is dominated by I/O, while the functional model is better matched to pure computation. The IT industry today is mostly I/O dominated interactive applications.
(It's not clear yet if Rust is "too clever" for widespread use. The jury is still out.)
Some of the most fanatic Haskell supporters I have met have been university professors that have not written a single line of "production" code in their life. They thought Haskell would eventually rule the programming world because <insert cute language feature>.
That matches up quite nicely with what the article says. Haskell is well-suited for applications that have complicated computations going on inside of them, but which provide a relatively small surface to the impure world outside the program ("CRUD apps", as the author notes). And universities are a place where you come across a lot of computationally difficult problems. (Case in point: I learned Haskell in a lab course called "Haskell for Natural Language Processing", and the instructor noted that the NLP group does all their programming in Haskell.)
It's been a while since I last used Haskell but I remember its I/O story actually being pretty good. The fact that I/O must happen in the IO monad doesn't prevent you from doing it; it just gives you better tools to reason about it.
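A tiny sketch of what that separation looks like in practice; nothing fancy, the point is only that the types mark where the I/O happens:

import Data.Char (toUpper)

-- A pure function: no I/O possible here, and the type says so.
shout :: String -> String
shout s = map toUpper s ++ "!"

-- An IO action: reads a line, uses the pure function, prints the result.
main :: IO ()
main = do
  name <- getLine
  putStrLn (shout name)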
I wish Haskell "advocates" would give it a rest with banging the "purity" drum and tossing out gee-whiz examples of "elegant" code with no relevance to practical programming. I found it to be a very practical language, not without flaws, but also with powerful arguments in its favor for certain sorts of applications.
Without backing/promotion from Google, Go would be completely obscure and disused.
Google could promote a language that is a complete piece of crap and it would be instantly more successful than something well-engineered from some unknown hackers.
An interesting question is why Google did not start to use Haskell and promote it. Did they also think it was too hard or did they not see the benefits of it? Or something else
What about Dart(https://www.dartlang.org/)? Dart is developed by Google and has features that are tailored for or lend themselves to specific modern use cases i.e. IoT, mobile, and web applications. The language is only two years younger than Go but as far as I can tell has attracted nowhere near the community or adoption.
I can't find the reference, but I recall someone saying that Dart has huge adoption measured by revenue resulting from its use as an internal Google sales tool.
I'm gonna go and say bullshit on this one. Python, C, and C++ all had promotion from usage more than anything else, and the same is true of Golang. It is a language that attracts a community.
I would rather thank Google for letting their engineers use it and develop it than cast unproven aspersions.
Since Python and C++ were not successful due to promotion by Google (but for other reasons), you do not have a logically deductive counter-argument here.
Your counterexamples speak nothing to the validity of the proposition "if Google promotes something, it will likely be successful".
They are counterexamples to the ridiculous proposition "If, and only if, Google promotes something, it will be successful".
Can you say a little more, or provide pointers to resources that you find credible, about "The 'goroutine' and 'channel' thing turned out to be less useful for general concurrency than expected"? I'm less interested in criticism of Go-the-language, and more in general criticism of CSP, or of CSP-as-implemented-in-Go, whichever you think applies.
You will have a hard time finding it, because nearly all "mainstream" languages use that subset of CSP.
I am still working on explaining it well myself, but I am trying to write a blog post about it.
My main problem with CSP concurrency is simple: synchronisation. That does not work in the face of possible errors. Period. The only way to make it work without breaking the laws of physics too much is to bet that there are no hardware errors and that everything stays single-threaded.
Generally, look at the body of work against RPC and about consensus in distributed systems. Because your computer itself is a distributed system. It just happens that it has hidden it well for decades, but not anymore. We are even seeing timing problems between cores in some chips.
I agree on the general point that Go is simple and familiar, but it's absolutely not memory safe (and, as you said, channels are really not working as well as expected in practice, unless you start using them as actor mailboxes, which I don't think was the original point).
Actually, seeing all those panics on null pointers was really my first surprise when I started coding in Go (and really my first disappointment).
Panics on null pointers don't indicate that the language isn't memory-safe. Lack of memory safety means that misuses of memory go undiagnosed, with unpredictable results (not to mention that they are allowed in the first place, with features like arrays that blindly allow out-of-bounds access).
A null pointer indicates a memory problem only if it is the field of a memory object that was wrongly overwritten with zeros, giving rise to the null value in an invalid way. If the null value was correctly assigned, then it isn't a problem in the memory safety category.
If Go diagnoses all null pointers with panics, that makes it safer than C, whose implementations rely on the virtual memory hardware to catch null pointers. An expression like ptr->member in C will only segfault on a null ptr if the member displacement is within 4096 bytes, or whatever the virtual memory page size is. If member is offset beyond the unmapped zero page, then some memory is silently accessed or overwritten. Not to mention that on systems without VM hardware, ptr->member just silently works when ptr is null, for any struct type.
I don't program in Go, but as far as I know, panic is a kind of high level exception handling feature in Go. So it sounds like you should be able to write Go code which expects that null pointers will occur in some object field or parameters, and just use panic/recover to trap it all in one place instead. In C, you would need platform-specific coding to do anything similar, like catching a SIGSEGV in POSIX or a structured exception in Win32. Yuck!
Go is almost memory safe. There are two known areas where race conditions can break memory safety - maps (that was on purpose, as a possibly premature optimization) and shared slice descriptors (a design bug).
Haskell is one of my favorite languages, and I just wrote a book using Haskell (you can read it free online: https://leanpub.com/haskell-cookbook ).
That said, there are a lot of tasks that I use Ruby for (text wrangling), Python (machine learning), and Java (huge number of useful libraries).
I use Haskell for NLP, some web apps, and coding algorithms that don't require libraries unavailable for Haskell. If Haskell had a huge set of 3rd-party libraries like Java, then I would probably use it for most of my development.
Haskell Tutorial and Cookbook is one of the two books I've been using to learn Haskell (the other one being Haskell Programming, by Christopher Allen and Julie Moronuki). Thanks for the excellent book!
I kind of like Haskell, but the learning curve is brutal and the documentation is often intensely non-helpful if you don't want to learn a whole lot of mathematical terms.
E.g., when browsing and trying to understand parts of the XMonad code, I came across the "Endo" type. I Hoogled it, and the only documentation for it is "newtype Endo a: The monoid of endomorphisms under composition." That's... not helpful for me.
The biggest thing missing from Haskell docs is examples, I find. The mathematical language is tempting when you’re writing docs, because it’s succinct and precise, and can say a lot about how something’s implemented:
-- Endo is…
newtype Endo a
  = Endo { appEndo :: a -> a }
--  ^
--  |
--  …the monoid of endomorphisms…
--  |
--  V
instance Monoid (Endo a) where
  mappend (Endo f) (Endo g)
    = Endo (f . g)
--    ^
--    |
--    …under composition.
  mempty = Endo id
But not so much about how to actually use it:
-- If you’ve got a list of functions…
pipeline :: [Int -> Int]
pipeline = [(+ 3), (* 2), abs]
-- …you can compose them with the generic mconcat.
run :: [a -> a] -> a -> a
run = appEndo . mconcat . map Endo
run pipeline (-1) == 5
I agree Endo doesn’t add a lot of value. I tend to use “compose = foldr (.) id” for this, if it’s even necessary—most of the time my “pipelines” are not dynamic, so there’s no reason to put them in a list in the first place.
This brings to mind what I consider to be perhaps the biggest issue with Haskell (I'm not accusing you of this, you've done a great job explaining): due to language expressiveness, overly generic code is far too often a baseline, when it really shouldn't be.
The language itself doesn't demand we write code this way, but the community, understandably, loves to be as expressive as possible in many cases.
But all of this comes with a cost of cognitive overhead that seems to rarely be worth paying.
Elm was an enlightening experience for me, having started with Haskell, because it helped me to realize just how little I ever took advantage of a lot of the very generic Haskell code, and how far the simplest of functions and types could take you in the overwhelming majority of cases.
Yes, it is a bit unnecessary in the simple list example above, but Endo becomes useful when using arbitrarily nested Foldable structures and when composing monoids, e.g. a pair of monoids is also a monoid. In other words, it becomes useful at scale.
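A small sketch of that "pair of monoids" point (purely illustrative names):

import Data.Monoid (Endo(..))

-- A pair of monoids is itself a monoid, so these compose componentwise for free.
step :: (Endo Int, Endo String)
step = (Endo (+ 1), Endo ("x" ++))

main :: IO ()
main = do
  let (Endo f, Endo g) = mconcat [step, step]
  print (f 0, g "")  -- (2,"xx")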
To quote from Paul's post:
"The trouble is that very often, the sorts of examples that are easy to discuss aren’t of sufficient scale to reveal any major differences between A and B."
As an aside, it's worth noting that we bother giving `a -> a` a name other than `a -> a` here because there are other monoids for things of similar forms and there's overlap. For instance, functions of the form `Monoid b => a -> b` can be combined by combining their output, and that gives us a different monoid with `const mempty` as the identity.
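A quick sketch of that second monoid on functions, where outputs are combined pointwise rather than composed (again, just an illustration):

import Data.Monoid (Sum(..))

-- When the result type is a monoid, functions a -> b form a monoid by
-- combining outputs pointwise; the identity is const mempty, not id.
f, g :: Int -> Sum Int
f x = Sum (x * 2)
g x = Sum (x + 1)

main :: IO ()
main = print (getSum (mconcat [f, g] 10))  -- (10*2) + (10+1) = 31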
OK, the basic idea is that you have two functions, `f_1` and `f_2`, of type `a -> a`, and you want to run them both on an input value `x` to get an output `y`. You can think of `f_1` and `f_2` as a data pipeline that `x` passes through and then the output comes out the other end. So,
y1 = f_1 x
y = f_2 y1
-- or,
y = f_2 (f_1 x)
But what if you have an arbitrary list of functions as your pipeline?
y = f_n (f_{n-1} (... (f_2 (f_1 x)) ...))
Well, the solution is that functions of type `a -> a` are composable, so you can 'add them up' as if they were real values (which they are). The Haskell function composition operator is `.`, so:
y = f_2 (f_1 x)
-- is the same as,
y = (f_2 . f_1) x
Now, how do you go from 'composing two functions' to 'composing a list of functions'? Well, monoids are a typeclass (a statically-enforceable design pattern) that encode the idea of being able to combine two things into one, and 'for free' give you a function `mconcat` that combines a list of those things into one. So if you have a monoid instance for functions of type `a -> a`, you're now able to combine many of them together, get a single composed function, and apply your input value `x` to that.
`Endo` is just a fancy name for 'function of type `a -> a`'. I agree to a large extent that all this category theory stuff can get distracting. I personally think type theory is a much more rewarding field of study. To 'endomorphism', a type theorist would say, 'Oh, you mean `a -> a`?'
This. Documentation is the most frustrating part of the Haskell experience to me. More than once I've come across a module on Haddock and the module description just says
This module is inspired by the following paper: <DOI>
Reposting my message to evincarofautumn to you as well:
If you aren't planning on submitting that to those docs, I will. In fact, I'm sure many would find it valuable if you added examples along those lines to the most popular but math-inspired Haskell libraries.
Hi, I'm not planning to, but please be my guest. One thing I would recommend is to put examples in a collapsible Haddock section as described in https://github.com/haskell/haddock/issues/335 so readers don't have to skim through it while looking up API references.
If, in such threads, a few posts up from the point analogous to this one, people just started talking about composing functions and composing lists of functions, then more understanding would be generated.
The problem is that most people don't immediately go and learn foundations like that without having an idea of what they'll get out of it first.
For example, the poster you replied to can only see the advantage being that his question is answered. The tangible benefit of taking time out of the day to go and read those foundations needs to be made clearer, I think.
“Endo” is a type constructor for functions that have the same domain and codomain, so if you have two values of type “Endo Foo”, for any type “Foo”, you know that they can be composed in either order. That composition of such functions is associative and has an identity, I hope it isn't necessary to explain.
Mathematical functions are taught in school. That function composition is associative and every set has an identity function, this is stuff that every high school graduate should know.
---
Reply to montatonic here:
I'm neither a Haskeller nor do I want to see Haskell “take over” the world. Some of my comments even got lots of downvotes from Haskellers.
---
Reply to sidlls here:
Yes, FWIW, I don't really agree with the use of the term “endomorphism”. It isn't even precise: the type `Endo` is inhabited by endomorphisms in one specific category (Hask), not endomorphisms in arbitrary categories. So “endofunction” would be both more accurate and less pompous.
As for “codomain”, I can't agree, though. This is really high school stuff.
That's been very far from my experience, when learning Haskell or teaching it. And as mentioned in the comment, the parent poster is not "a Haskeller".
> Mathematical functions are taught in school. That function composition is associative and every set has an identity function, this is stuff that every high school graduate should know.
As a Haskeller myself, I wholeheartedly disagree. I knew nothing of function composition until Haskell. I only heard about functions in high school once, my senior year, and it was very basic. Admittedly I went to high school in rural Texas.
I also never heard about sets; the word was never used in my high school math classes.
> this is stuff that every high school graduate should know.
You seem to have a lot of faith in the lowest common denominator of high school graduates and I think it does you a disservice in being able to help others learn these topics.
I sense my last paragraph might have sounded a bit... Aggressive? That's not my intention.
P.S. I still don't really know what domain and codomain mean :)
Errr, what I meant is “if there exist high schools that teach this material (not necessarily yours), then there exist educators who think high schoolers can learn it”.
The jargon "monomorphism", "codomain", "endomorphism" and even "function" and "domain" as it truly applies here are most certainly not taught even as part of a typical high-school calculus program. I can't speak for IB (they only offered AP when I was in high school, so long ago). The basic, hand-wavy version ("a function maps a domain to a range") is about as complicated and deep as it gets.
Per Wikipedia, range is sometimes used for image (as you say) and sometimes used for codomain. That said, image probably is closer to the use in question.
Most high schools in my district, even the ones serving poor neighborhoods, taught at least through Trig/Algebra 2. Calculus was AP Calculus only (which one didn't have to be in an "honors" program to take).
If you aren't planning on submitting that to those docs, I will. In fact, I'm sure many would find it valuable if you added examples along those lines to the most popular but math-inspired Haskell libraries.
I'm neither of those, and I understood it just fine. The term “endomorphism” might sound scary, but once you read the definition, which is just one sentence long, it's actually a pretty simple concept. Have fun cramming the definition of any object-oriented pattern into a single sentence.
---
Reply to sidlls here:
I didn't say that the term might sound scary to you. I said that the term might sound scary in general, to “average” people. Because, frankly, relative to its actual mathematical content, the term is perhaps a little bit too pompous.
Also, when it comes to mathematical definitions, I wouldn't say one really knows a definition unless one can use it in a calculational setting.
The art in writing documentation is in effective communication to a varied target audience, not in simply fulfilling the obligation to be technically correct.
Sure, I don't disagree. I've bashed my head countless times against short, super-abstract definitions for which it is very difficult to provide concrete examples. So I totally empathize with the feeling you express.
However:
(0) As for “endomorphism”, the definition “function with the same domain and codomain” should be pretty easy to understand for anyone. (Technically, in an arbitrary category, morphisms don't have to be functions, but Haskellers work in the category Hask, whose morphisms are terminating functions.)
(1) The right place to define what a monoid is is the definition of the class Monoid, not the definition of every type that happens to have a Monoid instance.
Who said anything about "scary?" You assume I don't understand them, or that "not helpful" is a substitute for "unfamiliarity" or some other kind of similar deficiency. That's not very charitable.
I learned the jargon as part of my academic background. That was part of the reason I made the comment. These terms are jargon, or terms-of-art. Knowing jargon isn't the same as knowing the things jargon describes or having the capacity to reason about them well. Similarly, not knowing the jargon does not imply ignorance, stupidity, fear, or any of a number of other negative things.
Honestly, I think for a lot of people who've used both Haskell and Elm, the answer is pretty clear: lots of Haskell in the wild can be very hard to read and think about, even with significant time invested in it; and when it is simple to read, it's likely not using many more features than Elm has. Yet, even with a very strict subset of its features, it is a very expressive language: sum and product types are an incredibly powerful concept; I'm blown away by how much more straightforward managing state is with them. Expressive types and superb type inference generally make refactoring a breeze. There are just many huge wins overall.
But you can have the bulk of these wins in a simpler language like Elm, or a language like Rust which tries to only add more type complexity in cases where it would significantly improve code in practice, rather than in theory.
Haskell is necessary; it pushes the state of the art forward because of how experimental it is (compiler extensions). But it's pretty clear that the community is skewed towards research rather than industry, and culture strongly influences what a language is practically capable of.
This does not at all mean that Haskell isn't fully capable of industrial applications; we have direct evidence suggesting otherwise. But with a philosophy to "avoid success at all costs", it should be fairly clear that moving out of the realm of obscurity isn't exactly a goal of the language.
In the end, Haskell's ramp-up time (to become productive) will slow its momentum, and I feel that's true of all languages with a long ramp-up time. Most popular languages have a pretty low ramp-up time, and part of that is business concerns (getting a whole team productive in a language with a long ramp-up time takes... well, time, and it's rarely the right option to have them make that transition), but part is also the personal, human side of it: we suck at being bad at things, and the experience of it is hard to push through for people.
Actually, jumping back into mathematics outside of work, I'm having to go through this period of uncomfortable 'badness' and resisting my urges for the greater good. It is emotionally taxing to be 'bad' at something. I think that's part of why we favour languages that allow us to be productive quickly, despite it probably paying off in the long run.
I wouldn't do it with everything (you need quick wins _somewhere_), but it's a good skill to have, to be disciplined and persevere despite what your head's saying.
I'm no psychologist though, so this is largely conjecture and anecdotes.
> And so the history of programming has been a series of advancements in both removing barriers to composability, and building new programming technologies that better facilitate composition.
This is like saying that the history of architecture has been a series of advancements in reducing the cost of structures. This is the fallacy of narrowly focusing on a single sexy metric. (See below.)
> Stage 4: Composability is destroyed at program boundaries, therefore extend these boundaries outward, until all the computational resources of civilization are joined in a single planetary-scale computer
Smalltalk tried to do this with the Image, which contained everything within it, including compiler, source control, and development environment. (Smalltalk started out as an Operating System.) However, what this actually did was to sequester the community in the boundaries of the image while the rest of the programming field was busy building varied forms of infrastructure to allow interaction.
I think this hypothesis also offers an explanation for why Go is popular, even though the language is “boring” and could have been designed in the 1970s.
Java won over Smalltalk because the Java community understood much better how to win mindshare. Ruby took over Smalltalk's pure object schtick because it could play nice with the rest of the programming world, and it had a killer app in Rails.
Go wins because it is exceedingly well designed for convenience in a myriad of ways where other languages drop the ball. You can have very fast incremental compilation in C++, if you are careful. In Go, it just works. You can have a good deployment process with Java, if you are careful. In Go, it just works. You can have easy to use concurrency in other languages, like C#, provided you find and learn the correct libraries. In Go, it's baked into the language, and people can learn how to use it competently with about as much effort as learning how to implement FSM using switch{}, and very good documentation and tutorials are dead easy to find.
A Lamborghini is way faster than a Corolla. A Ford F300 can haul many times more groceries. A Corolla doesn't have remotely near the potential for track day fun as an Ariel Atom. The above 3 qualities are sexy metrics, but in terms of overall utility and convenience for the largest number of people, for the most favorable cost/benefit, the boring old Corolla kicks butt. (That said, I'm pretty sure that someone can figure out how to disrupt programming in the way that companies like Tesla are poised to disrupt the above story. It might well be a "boring" version of functional programming that has many "just works" qualities.)
A programming language doesn’t need to “take over the world” to be successful. Nor does it even need to be “the best”, just serve a niche, as the article goes into. Haskell is the best language I’ve found for the software I write, so I use it. If other people aren’t using it, then I figure they have a good reason, or it’s their loss.
As someone who has done a lot of programming in Haskell, I'm glad to see that the limitations of composability across program boundaries is being recognized as a problem. (And it may be a problem that Haskell can't fix without some help from the operating system.) When you have to convert data to type "String" in order to send it over a pipe, and then convert it back in the receiving program, that sort of defeats the purpose of having a strong type system in the first place. Similarly, if you have a bunch of processes, each with their own IO Monad interacting with each other, it starts to have an uncomfortable resemblance to the shared mutable state that Haskell tries so hard to avoid.
As for the question of why Haskell isn't more popular, there are a lot of possible reasons. GC overhead and less control over memory layout makes it less performant than C/C++ and not really suitable for real time tasks. Some algorithms are hard to express without mutation. (The ST Monad is usually appropriate for those cases, but it's kind of cumbersome to use.) Laziness makes memory and CPU utilization hard to reason about. The learning curve is very steep. There aren't many books on "advanced" Haskell programming, or writing system software in Haskell.
One of the things I really like about Haskell is that you never stop learning new things, and new abstractions and libraries are being invented all the time. After 25 years or so, we're still figuring out new ways to program in Haskell. This has a couple of unfortunate side effects, though. One is that programmers can become distracted from the task at hand and direct all their effort at figuring out a better abstraction to solve their problem. A lot of great things come out of such work, and sometimes it pays off in terms of productivity, but I think the common criticism that there isn't enough practical, "boring" software being written in Haskell is justified. Another problem is that it's hard for Haskell programmers to communicate with each other or understand each other's code when they haven't learned or aren't comfortable with exactly the same set of abstractions.
I do think Haskell is on to something good. I don't expect Haskell to take over the world, but I think in a hundred years people will look back and say that ideas from Haskell were a major influence on later languages that were more popular. Learning Haskell is probably a good way to prepare for languages of the future that haven't been invented yet, in the same way that learning Smalltalk would have been a good way to prepare for the object-oriented languages that came after.
I completely agree with the heart of this article, and its reasoning for Go's popularity (e.g. familiarity for people who know languages like C and Java). It speaks to my experiences, anyway.
I went down the Go path myself for a while, because it was familiar, but I found that it had many of the same issues I was trying to get away from in other languages. As the article points out, I also found Go "boring."
After searching for a while, I ended up wanting to use Haskell, but it was taking me too long to become proficient with Haskell's unfamiliar syntax and functionality; which is why, the author posits, Haskell hasn't taken over the world. I ended up using Elixir instead. It offered me the functional paradigm I was looking for in Haskell, and the ease of managing large-scale applications, while also offering a familiar and easy to learn syntax. I'm still studying Haskell, and may eventually employ it professionally, but for now, Elixir is working for me just fine.
I like Go specifically because it is boring. I really enjoy the simplicity of it, which makes it easy to read and understand the core packages. I feel like the language doesn't try to do too much and gets out of my way so I can concentrate on the task at hand.
I don't write large complex applications though, thus far I've used it to write programs that fit into Go's sweet spot -- high performance network daemons that do a lot of I/O.
Go will lead to a generation of "stupid" developers writing "stupid" code because they were told you don't need "this or that"; then the next generation will rediscover things such as functional programming, generics, or type classes. Or Go will evolve into something more complex, like every language does, and early adopters will be pissed off that things aren't like the "good old times" anymore, as they are incapable of using constructs they don't understand.
Go developers already have that "dumbed-down" reputation.
If Go adds generics, that's enough to enable many kinds of higher order functions without copying and pasting or ugly (and slow) reflection.
A generic map, reduce, etc. are what I need. Across the thousands of developers in our organization, I'll bet we've implemented "mapString" or "mapObjectThing" dozens if not hundreds of times.
At least we could make a package for non built-ins.
Also lots of interesting generic data types we need across our org that are a pain.
With Java we have some data structure packages that Just Work and don't require copying and pasting.
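For contrast, a rough sketch of what a single generic map looks like with parametric polymorphism (Haskell here, since that's the thread's frame; mapList is just an illustrative re-spelling of the standard map):

-- One polymorphic map covers every element type; no mapString / mapObjectThing copies.
mapList :: (a -> b) -> [a] -> [b]
mapList _ []     = []
mapList f (x:xs) = f x : mapList f xs

main :: IO ()
main = do
  print (mapList (+ 1) [1, 2, 3 :: Int])
  print (mapList show [1.5, 2.5 :: Double])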
Programming abstraction is moving upward. Languages will play a less instrumental role. APIs are taking over. AI is becoming a fundamental building block of any non-trivial application. What you said could happen, but those concerns are more likely to become even less relevant.
From what I've seen, it's still a leg up on the idiocy of JS-only developers. I'd almost like to see anybody who wants to wear the title pass a proficiency exam in C first, as an absolute, rock-bottom baseline.
I hope Go does not "evolve" to support generics (whatever that is), because I and thousands of others have written Go code that has produced value for many without it.
And others of us have wasted thousands of man hours because now we need an interval tree that works with date ranges and numeric ranges. Now we need to generate code (extra build step, slowing down development) or copy and paste (error prone). And then another repo adds another interesting use for this single data structure and that type needs to be supported.
And that's one data structure. We've got thousands of developers, many of whom are now writing Go. This comes up all the time and is not a niche use case.
At this scale it's quite costly.
IMO slapping on even basic Java / C# style generics does not harm you and if anything lets you optionally choose to use higher order functions or novel data structures you can't implement today without reflection.
Finding overlapping intervals and querying on those ranges. E.g. answering this question: on January 2nd, what payment schedules apply (more than one because different countries pay out differently)? Or: if I add this schedule, does it overlap another one? If so, which ones?
That's a date range based use case for this interval tree structure.
Another team is doing something similar query-wise, but with integer-based ranges. I forget what their use case is, but our data structure is exactly what they need; however, we either need to use reflection (unacceptable), copy and paste (also garbage), or code generation (bad) to accept and return the right types.
This is trivial in Java, where I have a version of this written and used without complaint.
> I could be totally off but, if all the intervals were integer based, would this problem go away?
No.
Even if you represent date ranges as pairs of integer timestamps in UTC (for consistency) you now need to do annoying and possibly bug prone casting and converting. Remember, code not written is bug free. Those tests won't write themselves. Also think of N dimensions; those come up often and are hard to handle with this input type.
The moment someone needs a pair of GPS coordinates that are doubles you're SOL. Which incidentally is a common situation with interval trees: find things in a viewport, intersecting roads, etc.
Even the tree itself is actually generic. You could build on a red-black tree and, using generics, use an internal data type to order the tree with some extra bookkeeping, and thus use the same underlying tree implementation for, say, an ordered set structure, interval trees, etc.
You can do this all now with error-prone casting. I don't know if you've done Java pre-generics, but it is a vastly more error-prone world than post-generics Java.
Incidentally, the lack of these other structures leads to lots of OrderedXYZTypeSet-type code in our codebase, generated to avoid casting.
With generics we'd not only have one implementation for all types, but we'd likely open source it since the broader Go community could use it and add additional useful types and functions.
Go has very few built-in data structures, which exacerbates this problem. It at least partially has so few because the generics support in maps, slices, etc. is basically a special-cased hack, since generics aren't baked into the language, and is thus likely hard to maintain.
Intervals can only be "based" on ordered pairs of things with a total ordering and a finite computer representation, of which integers (trivially) and dates (only if represented with adequate constraints) are only two examples.
If a generic implementation of interval trees is not possible, the programming language is (deliberately) simplified for the sake of stupid users or stupid implementations. Note that this is a very low bar: in C, interval extremes can be void* and their total ordering can be represented as a function pointer int (*f)(void *, void *), while in old Java the interval tree could refer to a suitable "Element" interface, leaving cumbersome adapters and downcasts to client code.
I think that's just sloppy error handling, which is lame and unprofessional, but that's like saying you dislike Python because of all the stack traces.
... but I do dislike Python because of all the stack traces. Like 98% of the run time errors (and subsequent stack traces) I get in my Python applications could have been prevented by a proper type system.
> After searching for a while, I ended up wanting to use Haskell, but it was taking me too long to become proficient with Haskell's unfamiliar syntax and functionality; which is why, the author posits, Haskell hasn't taken over the world.
Similar experience for me. You can't read Haskell without learning Haskell.
Are you using the above languages for your day-to-day job? Or are you working on your own side projects? If the latter, can you give some examples of the things you're building?
I've tried learning them several times, but mostly stop after finishing the basic syntax :/
I am not using Haskell in my day to day job, yet, but I am using Elixir, and have been since last April.
For learning Elixir, I highly recommend Dave Thomas' book Programming Elixir, The Little Elixir & OTP Guidebook by Benjamin Tan Wei Hao, and the website https://www.learnelixir.tv. You may want to start with the website and jump into the books afterwards. I learned with the two books, but one of my co-workers started with the website, and said he found it quite helpful. I watched the tutorial videos after the fact, and they are quite good.
The website https://www.dailydrip.com also has a lot of Elixir tutorials (similar to Avdi Grimm's website, Ruby Tapas), but you have to be careful because the older videos contain descriptions of features that are deprecated. I wouldn't bother with Daily Drip until you have a solid foundation in Elixir, and can pick around the deprecated bits. The later videos on Daily Drip are up-to-date though.
Ask yourself this: are/were the designers of Haskell aiming to create a popular/widespread language? No. They were not. Quite the opposite. They have succeeded well in their aim to keep it a niche language.
Perhaps a more reasonable question is why F# isn't eating C#'s lunch.
Haskell is a pure functional language, and it enforces purity. Haskell did not take over the world for the same reason that pure OO programming, pure logic programming, pure relational databases, or pure anything else never took off.
I agree with the author that Haskell's unfamiliar syntax and functional constructs are why it hasn't taken over the world. I also think that Haskell's syntax and functional constructs are precisely why it should though. I also like the fact that Haskell isn't a mega-corp-owned technology, but rather grown by intelligent engineers to do intelligent things in intelligent ways.
I'm not the greatest Haskell programmer, but I love it. I recommend learning the basics of Haskell, if you haven't yet. Doing so improved my code in other languages quite a bit, so it was worth studying for that reason alone.
This is an interesting article about Haskell because it does concede that Haskell might not be the best language for everything and that other languages might be useful for certain kinds of apps. Well, not exactly: the actual claim was that Haskell is not superior enough to make learning it worthwhile for CRUD apps.
The big disconnect between FP/Haskell devotees and other programmers is that everyone else doesn't consider it self-evident that Haskell or FP is better, and the Haskell/FP crowd does a terrible job of explaining why they are better paradigms without leaning on assumptions like "all side effects are bad in all circumstances" or "imperative programming is always a worse way to implement algorithms", and without ever justifying empirically the claim that their paradigm is better.
You could ask a similar thing to any functional language. The internet has a very wide variety of introduction points to imperative languages, and quite a bit fewer for functional languages. This means that the barrier to entry isn't just "can learn a language" but the vast majority of the time, you have to convince programmers who already know another language to adopt a new one (and in an entirely new design paradigm than they're familiar with).
Functional languages have to be penetrable to newcomers and offer huge benefits to veterans to switch, or they're going to remain nearly esoteric. Is mathematically provable code nice? Sure is. Is that enough to win people over? Well, language adoption says otherwise.
It has nothing to do with newcomers or skill or any of that. Often, we need to make computers do a list of things. Conditionally, sometimes, but nevertheless, a list of things. This is human thinking. A C function is nothing other than a list of things the computer does, one by one. Functional programming essentially throws rocks in the way of anyone who thinks like this (which is pretty much everyone) and makes everything unnecessarily complex. There are a few areas where functional programming is great; parsing and compilers come to mind. Other than that, in my eyes, it is mostly an academic pursuit.
All the time? Human beings, on average, are more apt to think imperatively by default than otherwise, unless they've been explicitly trained to think otherwise.
Well, every language is a product that has to compete with other languages. Nothing to do with composability or advancements in out-of-touch research areas.
And these days languages compete on a lot of things: on performance, reliability, security, productivity, on syntax, on tooling, on libraries, on deployment and supporting infrastructure, on interoperability. Syntax is not even that big of a deal; it's not hard to compete on. You can just take Go's approach, or you can do it thoroughly and properly, not forgetting about newcomers, their likely previous experience, and designing the whole learning experience into the language too. You can't leave it as is, though, or it's going to be a huge disadvantage.
The main reason I don't make the jump to a fully functional language is the difficulty of refactoring. It's nice if you get the program structure right the first time, but of course that doesn't happen, and so the inability to easily pivot makes me stay away. Saying that, I do get hit by bugs that come from state, and each time I wish I'd used the functional approach, but it's not enough of a reason to make me switch.
That's the weirdest argument I have heard against Haskell so far. Refactoring is the single best thing about Haskell, IMO. 90% of cases are 1/ change the type, 2/ fix the fallout, 3/ done.
If I add a new member to a type, then because of immutability, changing that member means I have to think about creating new instances all the way up the chain, and then about how that change propagates out. It makes for more hassle and time when making changes, compared to just having mutable instances.
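A minimal sketch of the nested-update problem being described (types invented for illustration): with plain record-update syntax, every enclosing record does have to be rebuilt by hand, which is the boilerplate that libraries like lens exist to cut down.

```haskell
data Address = Address { street :: String, city :: String }
data Person  = Person  { name :: String, home :: Address }

-- Changing one nested field means rebuilding each enclosing record:
moveTo :: String -> Person -> Person
moveTo newCity p = p { home = (home p) { city = newCity } }
```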
That's… exactly the opposite of my experience, and that of every Haskeller I know. Refactoring is 100x easier in Haskell than any other language I've worked in.
100 times easier than right-click refactoring in a modern Java IDE? I beg to differ, if we are talking about your typical Haskell vim/text-editor rig. For a language with such powerful types, lacking a good IDE to take full advantage of them is like having a gun with no bullets.
If your idea of "refactor" is "change the name of a variable", then I guess Java wins. However, the refactors I do are frequently much more complicated, involving logic swaps that are far beyond a right-click action.
I've never used a modern Java IDE (as I've been fortunate enough to never have to program in Java), but still feel confident in the claim that refactoring Haskell is 100 times easier, yes.
Then you probably should, before making such claims. IntelliJ can do refactorings like the following:
* Extract code into a function, automatically suggested by duplicate-code detection; inline functions back into call sites.
* Convert imperative for-loops to functional-style streams and back again.
* Change function prototypes globally by adding or removing parameters.
* Extract classes and interfaces.
* Common sub-expression elimination (i.e. select an expression, introduce a new variable, and all uses of that expression can be replaced as well).
* Detect dead code and automatically remove it.
* Refactorings across languages.
* Replace inheritance with delegation or vice versa.
* Automatically generify code; extraction of type variables.
* Obviously a whole suite of structural code changes like renamings and other smaller things.
That's without getting into all the other code intelligence features.
Regardless of how fancy you feel the type system of a language is, these sorts of keyboard-driven refactorings are tremendously helpful for getting code and APIs right.
A few years and maybe ~50k lines of code, iirc. I've heard the exact same reports from people who work on multi-million-line Haskell codebases, though (e.g. Standard Chartered).
So you don't use PureScript for writing code, according to your definition? OK. Anyway, that aside, the Haskell pushers out there really need to address why there's so very little in the way of applications written in Haskell. The article notes that Haskell is used very successfully for compilers. The number of successes elsewhere is very, very much smaller.
Why? It's important not to just yell "rah rah" but actually analyse it as a problem and maybe, just maybe, you know, solve it.
Haskell is used successfully in industry, e.g. at Standard Chartered where they have >1MM lines of Haskell in production. Take a look: https://wiki.haskell.org/Haskell_in_industry
One critical assumption behind discussions like that is that popularity is strongly correlated to quality. But that just doesn't seem true in practice—popularity is the result of complex social dynamics and isn't strongly correlated to any intrinsic qualities of whatever becomes popular. We come up with compelling narratives about why one thing gets popular and another doesn't after the fact, but these are just rationalizations; we can't use them to make good predictions and they do a poor job of representing the social processes involved.
We can see this in a microcosm when we consider music. What makes music popular? To a large extent, it's music that's listened to by the right people at the right time—perhaps seeded intelligently with exposure and marketing. There are some minimal bars the music itself has to pass, of course: it can't be terrible and it has to be accessible, but those aren't high bars to clear.
There are thousands of bands as good or better than most of the ones you hear on the radio but they don't get anywhere. You just never hear them, or they never catch on among your friends and never make inroads into your social network. (Or, more importantly, into the social networks of the labels that push music in practice.) Unless they do, in which case you have some unknown band "going viral"—virality says a lot about how something spreads and little about the thing itself. It doesn't even matter what you mean by "better": some objective notion of quality, musical sophistication, aesthetics, "catchiness", pertinent lyrics... whatever. That's not the main driver of popularity.
Nautilus had a great article[1] about this a while back, based on some experiments run with music. They created several large groups of participants listening to the same set of 40 musical tracks over time. People in the control group listened to music independently; the other groups all had social feedback mechanisms within the groups. The results were all over the place: the popularity of songs was completely inconsistent across groups, and could be traced to early chance decisions that snowballed over time.
The article had a great analogy about how the whole process worked:
> …a single match is not the entire reason for a wildfire starting and spreading. But that’s exactly how we naturally think about social wildfires: that the match is the key. In fact, there are two requirements: a local requirement (a spark), and a global requirement (the ability of the fire to spread). And it’s the second component that is actually the bottleneck: If a forest is dangerously dry, any spark can start a fire. Sparks are easy to come by, and are not intrinsically special.
The programming language in question? Its qualities? Pragmatism? Purity? Elegance? All just sparks. Which one starts the biggest fire depends far more on its context than the spark itself.
Because the entire programming model breaks down at I/O. When you have to compromise the language's _raison d'être_ to perform a basic function of the computer, the cognitive dissonance is real.
I don't want to be excessively dismissive, but I'm curious if you have a more-precise explanation for why you think that Haskell's programming model "breaks down" at I/O? I'd say that one of Haskell's entire purposes as a research language has been exactly about modeling, reasoning about, and representing I/O and other sorts of effectful computation in the type system. Unless I've misunderstood you or there's something more subtle you're referring to, I don't follow what you're trying to say here.
This is what always fucked me up with Haskell. To introduce printf debugging, you get sucked into the morass that is the IO monad, and all the clusterfucking that involves.
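For what it's worth, printf-style debugging doesn't have to pull code into IO: Debug.Trace in base lets you print from pure code (with the caveat that laziness decides when, or whether, the message actually shows up). A small sketch:

```haskell
import Debug.Trace (trace, traceShow)

-- A pure function instrumented with trace; note there is no IO in its type.
step :: Int -> Int
step n = trace ("step called with " ++ show n) (n * 2 + 1)

-- traceShow prints any Show-able value before returning the second argument.
next :: Int -> Int
next n = traceShow n (if even n then n `div` 2 else step n)
```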
Apparently things have changed since 2007... Or else my Ivy league CS professors didn't know what the hell they were talking about. As I grow older and more cynical, the difference between those options continues to narrow...
I wouldn't be surprised at all if Ivy league CS professors didn't have in depth programming knowledge. They are Computer Science professors after all, not programming professors.
I recently finished a Linux course at school, part of which was focused on Vim. I got points taken away for using gg to jump to the top of a document and G to jump to the bottom. My professor said neither option would work. While I think it is reasonable to expect professors to know what they're talking about, it is fairly common that they don't.
The article is posing a question to those who believe Haskell is great. "Assume for the sake of argument that Haskell is great...then why hasn't it taken over the world?"
Anecdotally, I can't think of a case of someone who's used Haskell with proficiency and feels another language is generally better. But I can think of many cases of people who, confronted with the learning curve, set it aside to learn something else more immediately applicable.
Just looking at some bullet points, Haskell offers:
* A nice interpreter,
* Default immutability,
* A flexible and clear type system.
Default immutability does make reasoning about your program much easier. I don't think anyone would argue with that, but if you disagree, it'd be interesting to hear what you have to say.
The absence of an interpreter in Go is a major nuisance, and makes it about as difficult as Rust to try out something simple.
Whether you like the type system of Haskell or not, it is pretty good for the kind of type system it is. There is a joke that Haskell libraries don't have any documentation and when people complain the Haskellers say, the types are enough. At a certain point you realize this is true: you get a lot more out of a Haskell type signature than a Java or C type signature, so much so that your approach to a library is generally "I'm looking for a function A -> B..." and you fish around by type. This is the dream of types: meaningful, machine-checked metadata.
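As a small, concrete example of what fishing around by type looks like: say you need something of type `[Maybe a] -> [a]`. Searching Hoogle for that signature turns up `catMaybes` from Data.Maybe, and the signature alone tells you most of what it must do.

```haskell
import Data.Maybe (catMaybes, mapMaybe)

-- Found by searching for the type, not the name:
--   catMaybes :: [Maybe a] -> [a]
--   mapMaybe  :: (a -> Maybe b) -> [a] -> [b]

cleaned :: [Int]
cleaned = catMaybes [Just 1, Nothing, Just 3]   -- [1,3]
```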
It doesn't seem easy to reason about the performance of a program under lazy evaluation. Generally there are many aspects of a system that you may want to reason about, and making it easier to reason about one often comes at the expense of making it harder to reason about another. Pure functional code, for example, makes it easy to reason about what outputs are produced from what inputs, but that is one (important) aspect among many (and to take one thing that's really simple without purity and really tough with it - consider "pure functional data structures" vs data structures in imperative languages assuming a mutable RAM.)
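A classic concrete instance of that difficulty: the two folds below compute the same value, but under lazy evaluation the first can build a long chain of unevaluated thunks and blow up memory on large inputs, while the strict variant runs in constant space. Nothing in the types hints at the difference.

```haskell
import Data.List (foldl')

lazySum, strictSum :: [Int] -> Int
lazySum   = foldl  (+) 0   -- may accumulate a huge thunk before anything is forced
strictSum = foldl' (+) 0   -- forces the accumulator at every step
```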
Reasoning about pure functional data structures in the absence of laziness isn't that difficult, any more than the many mutable data structures that have copying/rebuilding/rebalancing that takes place only sometimes. The benefit of pure functional data structures is most visible when there is some kind of sharing -- concurrency of some kind -- in which case, reasoning about side effects becomes more difficult.
There are good approaches to handling in-place array update and similar operations in pure functional programming (the ST monad, for example), it's just not the default. As long as the side effect is somehow accounted for in the return value, referential transparency is maintained; we might say the return value is there to help us reason about side effects.
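A minimal sketch of that idea, using an STRef rather than an array to keep it to base: runST permits genuine in-place mutation internally, but none of it can escape, so the surrounding function stays pure.

```haskell
import Control.Monad.ST (runST)
import Data.STRef (newSTRef, modifySTRef', readSTRef)

-- Sums a list using a mutable reference internally; the mutation never
-- escapes runST, so sumST is still an ordinary pure function.
sumST :: [Int] -> Int
sumST xs = runST $ do
  ref <- newSTRef 0
  mapM_ (\x -> modifySTRef' ref (+ x)) xs
  readSTRef ref
```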
The gain from composition isn't that it's easier for our brains to comprehend; lots and lots of tiny pieces are often harder for our brains to understand than a much smaller number of larger pieces.
Haskell doesn't make it clear which things will run fastest. For example, in Java you can use HashMaps or ArrayLists depending on the situation, and it is easy to optimize for run time. But I'd have no idea how to do this in Haskell.
Efficiency is a matter of using appropriate data structures and algorithms for your problem, regardless of the programming language. Unfortunately, pervasive laziness is a serious disadvantage here. (But being functional is not.)
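The Java-style choices do exist in Haskell; they just live in libraries rather than in the language: Data.Map (containers) for an ordered map, Data.HashMap.Strict (unordered-containers) as the rough HashMap analogue, and Data.Vector (vector) for ArrayList-style O(1) indexing. A hedged sketch, assuming those packages are installed:

```haskell
import qualified Data.Map.Strict     as Map  -- containers: ordered keys, O(log n) operations
import qualified Data.HashMap.Strict as HM   -- unordered-containers: hashing, closest to Java's HashMap
import qualified Data.Vector         as V    -- vector: contiguous storage, O(1) indexing

ages :: Map.Map String Int
ages = Map.fromList [("alice", 30), ("bob", 25)]

visits :: HM.HashMap String Int
visits = HM.insertWith (+) "home" 1 HM.empty

third :: V.Vector Int -> Maybe Int
third v = v V.!? 2    -- safe constant-time indexing
```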