What Is Good About Haskell? (doisinkidney.com)
286 points by nuriaion on Oct 3, 2019 | 421 comments


Friend of mine is always trying to convert me. Asked me to read this yesterday evening. This is my take on the article:

Most of my daily job goes into gluing services (API endpoints to databases or other services, some business logic in the middle). I don't need to see yet another exposition of how to do algorithmic tasks. Haven't seen one of those since doing my BSc. Show me the tools available to write a daemon, an http server, API endpoints, ORM-type things and you will have provided me with tools to tackle what I do. I'll never write a binary tree or search or a linked list at work.

If you want to convince me, show me what I need to know to do what I do.


I wasn't really trying to convince anyone to use Haskell at their day job: I am just a college student, after all, so I would have no idea what I was talking about!

I wrote the article a while ago after being frustrated using a bunch of Go and Python at an internship. Often I really wanted simple algebraic data types and pattern-matching, but when I looked up why Go didn't have them I saw a lot of justifications that amounted to "functional features are too complex and we're making a simple language. Haskell is notoriously complex". In my opinion, the `res, err := fun(); if err != nil` (for example) pattern was much more complex than the alternative with pattern-matching. So I wanted to write an article demonstrating that, while Haskell has a lot of out-there stuff in it, there's a bunch of simple ideas which really shouldn't be missing from any modern general-purpose language.
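To make the contrast concrete, here is a minimal sketch of what I mean, with a made-up `parsePort` function standing in for the Go-style `res, err := fun()` call:

```haskell
-- `parsePort` is an invented example; the point is the shape of the
-- error handling, not the function itself.
import Text.Read (readMaybe)

parsePort :: String -> Either String Int
parsePort s = case readMaybe s of
  Nothing -> Left ("not a number: " ++ s)
  Just n
    | n < 1 || n > 65535 -> Left ("out of range: " ++ show n)
    | otherwise          -> Right n

-- One case expression handles both outcomes; with -Wincomplete-patterns
-- the compiler complains if either branch is forgotten.
describe :: String -> String
describe s = case parsePort s of
  Left err -> "error: " ++ err
  Right p  -> "port " ++ show p
```

Unlike `if err != nil`, there is no way to accidentally use the result while ignoring the error: the success value simply doesn't exist in the `Left` branch.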

As to why I used a binary tree as the example, I thought it was pretty self-contained, and I find skew heaps quite interesting.


> > functional features are too complex and we're making a simple language. Haskell is notoriously complex

This is a true statement. (Opinion yada objective yada experience yada)

> In my opinion, the `res, err := fun(); if err != nil` (for example) pattern was much more complex than the alternative with pattern-matching.

This is also a true statement. (yada yada)

The insight I think you're missing is this piece right here: `we're making a simple language`. Their goal is not necessarily to make simple application code. That's your job, and you start that process by selecting your tools.

For certain tasks, pattern matching is a godsend. I'm usually very happy to have it available to me when it is. And I do often curse not having it available in other languages to be honest.

But Go users typically have different criteria for what makes simple/reliable/maintainable/debuggable/"good" code than Haskell users have. Which is why the two languages are selected by different groups of people handling different tasks. You're making a tradeoff between the features and limitations of various languages.

And the language designers have yet another set of criteria for those things. In this case, adding pattern matching would absolutely make the language itself more complex, and they apparently don't believe that language complexity is worth the benefits of pattern matching. I think that's a perfectly reasonable stance to take.


I'm not sure if I understand you: the `res, err := fun(); if err != nil` pattern shows up everywhere in most Go code, and I think that pattern-matching would be a better fit for it. Swift does it pretty well, as does Rust, both of which occupy a similar space to Go.

I get that there's a tradeoff with including certain features; I suppose I disagree that the tradeoff is a negative one when it comes to things as simple as pattern-matching, and I think it should be included in languages like Go.


I'm not arguing against pattern matching. Like I said, I prefer it where possible. I'm also not arguing in favor of multiple return with mandatory checked err values. (Though I prefer either over the collective insanity that went into making exceptions the default approach to handling errors in most languages.) I'm just pointing out that I think you're missing a key word in the stance of the go language developers.

They're not saying that `if err != nil` is better or worse, simpler or more complex, etc... than pattern matching for application code.

They're saying that supporting pattern matching makes the language itself more complex, and they're not in favor of that tradeoff. You're focusing on application complexity, and that's a very different thing.

Both the go language authors and the kind of developers who choose to use go think of the relative simplicity of the language itself as a feature. Even if it causes the application code to be slightly more complex. It's just another dimension that can be used when comparing programming languages, and one that group tends to value more than other groups.


Oh ok, I understand. I don't really buy the idea that go is a simple language, I have to say. A lot of go's design choices read (to me) as needlessly complex, like features were added to address corner cases one by one instead of implementing the fundamental feature in the first place. "Multiple return values" instead of tuples; several weird variations on the c-style for-loop; special-cased nil values instead of `Maybe` or `Optional`; `interface {}` and special in-built types instead of generics, etc. ADTs and pattern-matching would obviate "multiple return values", nils, and greatly simplify error handling.
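An illustrative sketch of two of the features I mean, assuming nothing beyond the Prelude (`safeHead` and `minMax` are made-up examples):

```haskell
-- Maybe instead of a special-cased nil: the absence of a value is
-- an ordinary value of an ordinary type, checked by the compiler.
safeHead :: [a] -> Maybe a
safeHead []    = Nothing
safeHead (x:_) = Just x

-- Tuples instead of "multiple return values": a pair is a first-class
-- value that can be returned, stored, or passed along whole, with no
-- special case in the language.
minMax :: [Int] -> Maybe (Int, Int)
minMax [] = Nothing
minMax xs = Just (minimum xs, maximum xs)
```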


A very instructive exercise for anyone who is or intends to be a software developer is to write some sort of interpreter and/or compiler. (As well as a virtual machine and/or emulator) Depending on your approach this can take a weekend, a few months, or the rest of your life.

For instance, and amusingly enough written in golang, one of the most respected recent books on this topic is `Writing an Interpreter in Go` and its sequel `Writing a Compiler in Go`. https://interpreterbook.com/ and https://compilerbook.com/ Both of these books are reasonably short, and have the reader make meaningful progress within a weekend.

Going through the motions of actually making your own programming language (or reimplementing an existing one) teaches you a lot of things you wouldn't otherwise expect about how to write general code, how to use existing languages effectively, and how things work under the hood. It's also one of the best ways to really get a practical feel for how to approach unit testing.

It's an exercise I'd recommend if you haven't gone through it already. It might make it really click for you why some features that seem like a no brainer and should be in every language aren't, and why some undesirable "features" are so prevalent.


> It might make it really click for you why some features that seem like a no brainer and should be in every language aren't, and why some undesirable "features" are so prevalent.

I hate this kind of "I have secret knowledge, why don't you spend T amount of your time on some big project to maybe come to the same secret insights I mean". If you have an opinion on why pattern matching is so complex and undesirable, just come out and say it please. Otherwise I'll just call you out as not really having an argument.


> I have secret knowledge, why don't you spend T amount of your time on some big project to maybe come to the same secret insights I mean

Alternate interpretation: I learned something valuable from doing this thing; perhaps you'd be interested in doing so as well, since the book that took months or years to write will do a better job teaching it than I will in a five-minute break while typing on HN.

It's always impressive when freely sharing knowledge and tips is somehow taken as being insular and exclusive.

> If you have an opinion on why pattern matching is so complex and undesirable

Where did I say pattern matching is undesirable? It sounds more like you just want a fight here.

Remember the HN guidelines:

> Please respond to the strongest plausible interpretation of what someone says, not a weaker one that's easier to criticize. Assume good faith.


> It's always impressive when freely sharing knowledge and tips is somehow taken as being insular and exclusive.

But you didn't share knowledge. You suggested that you had knowledge that was pertinent to the topic at hand. But you didn't share it. You did share tips for resources where one can learn more, and that's great. But you didn't add something like "... and that's where I learned that pattern matching is undesirable because <technical reason>".

> Where did I say pattern matching is undesirable?

This whole thread was about you saying that pattern matching was undesirable from the point of view of Go's designers or implementors due to their design goal of simplicity. Then you mentioned those compiler resources. The only reasonable interpretation for me is that you wanted to say that you did indeed know concrete technical reasons why pattern matching in Go would be complicated and therefore undesirable.


> This whole thread was about you saying that pattern matching was undesirable from the point of view of Go's designers or implementors due to their design goal of simplicity.

The only use of "undesirable" in any of my comments was in regard to features that are prevalent across languages today. If you must know I was thinking of inheritance and exceptions specifically.

As far as pattern matching goes, I was making no arguments except to say that I like it, adding a feature like pattern matching adds some non-zero amount of complexity, and that the go authors are apparently uncomfortable with that complexity. As I am not a go author, I am unsure of their exact reasoning and would not think to say why they believe that. My implication was not that I have an exact concrete reason for why the go authors feel the way they do. It was merely that I don't inherently disbelieve them when they say they have a reason.

In fact my exact wording was "I think that's a perfectly reasonable stance to take", which does not imply agreement, only a lack of strong disagreement. In other words I don't think they're ignorant of the matter or misrepresenting the situation.

> But you didn't share knowledge. You suggested that you had knowledge that was pertinent to the topic at hand. But you didn't share it. You did share tips for resources where one can learn more, and that's great. But you didn't add something like "... and that's where I learned that pattern matching is undesirable because <technical reason>".

The comment that appears to have gotten you riled up came after the person I was talking to said they understood. After a discussion about language complexity I thought that it would be appropriate to suggest some resources on a "quick" project that can help build an intuition on that topic. And to be honest, it's a project I like to find excuses to suggest. I find people tend to be surprised at how easy and fun it can be to make some meaningful progress.

I understand that you would like for me to somehow short circuit that process, but I don't believe I am capable of building someone else's intuition by posting a throwaway comment on HN. Intuitions are typically built on experience and tinkering, not reading someone else's experiences.

That you view that project suggestion as a continued argument is unfortunate, I can assure you that was not my intent. Again referencing the HN guidelines, I encourage you in future to try to read people's posts first with the assumption that they are being genuine and only fall back to an assumption of malice when you absolutely have to. Long drawn out arguments over semantics don't help anyone.


Ah... in religion we call that "gnosticism". (Not really important, it just struck me as something weird to find in a HN thread.)


> A very instructive exercise for anyone who is or intends to be a software developer is to write some sort of interpreter and/or compiler.

Another exercise, perhaps less demanding in this regard, is to explore using Free Monads[0] to implement an EDSL[1] for a problem domain. Of course, the approachability of this varies based on the person involved.
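A minimal sketch of the idea, hand-rolling `Free` so it needs no external packages (the `KV` instruction set, `put`/`get`, and `runPure` are all invented for illustration):

```haskell
{-# LANGUAGE DeriveFunctor #-}

-- A hand-rolled Free monad: a program is either a finished result
-- or one instruction wrapping the rest of the program.
data Free f a = Pure a | Free (f (Free f a))

instance Functor f => Functor (Free f) where
  fmap g (Pure a)  = Pure (g a)
  fmap g (Free fa) = Free (fmap (fmap g) fa)

instance Functor f => Applicative (Free f) where
  pure = Pure
  Pure g  <*> x = fmap g x
  Free fg <*> x = Free (fmap (<*> x) fg)

instance Functor f => Monad (Free f) where
  Pure a  >>= k = k a
  Free fa >>= k = Free (fmap (>>= k) fa)

-- The instruction set of a tiny key-value EDSL: one constructor per op.
data KV next
  = Put String Int next
  | Get String (Maybe Int -> next)
  deriving Functor

-- Smart constructors lift instructions into the Free monad.
put :: String -> Int -> Free KV ()
put k v = Free (Put k v (Pure ()))

get :: String -> Free KV (Maybe Int)
get k = Free (Get k Pure)

-- A pure interpreter over an association list; a real one might talk
-- to an actual store instead -- same program, different interpreter.
runPure :: [(String, Int)] -> Free KV a -> a
runPure _   (Pure a)             = a
runPure env (Free (Put k v n))   = runPure ((k, v) : env) n
runPure env (Free (Get k n))     = runPure env (n (lookup k env))

-- An example program written in the EDSL.
prog :: Free KV (Maybe Int)
prog = do
  put "x" 1
  put "y" 2
  get "x"
```

The instructive part is the same as with interpreters: the program is just data, and the meaning lives entirely in whichever interpreter you run it through.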

> For instance, and amusingly enough written in golang, one of the most respected recent books on this topic is `Writing an Interpreter in Go` and its sequel `Writing a Compiler in Go`.

Cue obligatory reference to "the dragon book":

  Compilers: Principles, Techniques, and Tools[2]

0 - https://softwareengineering.stackexchange.com/questions/2427...

1 - https://www.quora.com/What-is-an-embedded-domain-specific-la...

2 - https://suif.stanford.edu/dragonbook/


Yeah I'm definitely not saying anything bad about the Dragon book here.

But I know there's a recency bias when people are evaluating tech books, so if there's a good book from the last five years I'll recommend that over a great book from the last 15, just so there's a higher chance of the recommendation actually being used.


No worries mate.

I mentioned the dragon book by obligation, not in comparison to the works you referenced.


If anyone is curious about an updated resource, I've found Modern Compiler Design much more approachable than the Dragon Book. Published in 2012, it includes chapters on designing object-oriented, functional, and logical compilers.

https://www.springer.com/gp/book/9781461446989


Hadn't heard about that one, thanks!


> one of the most respected recent books on this topic is `Writing an Interpreter in Go`

Is 'recent' the key word here? ;) Because that is a very bold claim to make.



> Multiple return values instead of tuples

I remember having some bugs in Python due to one-element tuples; I don't think I would have had the same issue if Python had multiple return values instead.


You keep missing the point entirely. Go was created to solve a very specific Google scenario: offer a valid alternative to C++ and Java for whatever they do at Google. It's not a language created to make college students or language hippies happy; if you are looking for that, look somewhere else. Go can be picked up by any dev with minimal experience in C/C++/Java in 1-2 weeks, and that was one of the main design targets. Another was fast compile times; adding all those nice features you'd like would also make the language more complex to parse and compile. You can talk about how much you like Haskell all day long, but if you keep using Go as a comparison you simply show you have no clue what you are talking about. It's literally apples to oranges.


Maybe I am missing the point! It certainly wouldn't be the first time in an argument about programming languages.

I do understand, though, that the purpose of Go is not necessarily to push the boundaries of language design. I also understand that it's important the language is easy to pick up, compiles quickly, etc.

I think that some of Go's design decisions are bad, even with those stated goals in mind. Again, I don't want to overstate my experience or knowledge of language design (although I do know a little about Google's attitude towards Go, since that's where I spent my internship learning it), but some features (like "multiple return values" instead of tuples) seem to me to be just bad. Tuples are more familiar to a broader range of programmers, aren't a strange special case, are extremely useful, and have a straightforward implementation. Also, I don't want a bunch of fancy features added to Go: ideally several features would be removed, in favour of simpler, more straightforward ones.


I do agree, I would prefer tuples to multiple return in go.

Perhaps they find it easier to teach to users coming from languages with little or no type inference? Java and C++ programmers in my experience don't tend to be familiar with tuples, despite there being a tuple in the C++ stdlib. My purely uninformed guess is that it's because of how verbose declarations can get in Java, or in C++ without auto/decltype from C++11.


My best advice is do not try to learn functional programming via Haskell. It has done more to turn people off of functional programming than just about anything.

If you want to learn statically typed functional programming learn Elm (which takes only a few days), then one of F# or OCaml.

If you want to learn dynamically typed Functional Programming, learn Clojure or Racket/Scheme.

The amount of investment it takes to see a return on learning Haskell makes it terrible as an introductory language. And every proponent of it glosses over this part. It has some advanced concepts, but it's not an introductory language.

There is so much benefit your coding can get from learning FP that you should pick a language that allows you to see and judge that value prop on your own quickly, not one that requires you to invest so much time becoming Haskell-proficient before you get the return on your learning.


Or go all in and actually expose yourself to an entirely different programming paradigm, there is so much more to "FP" that you can only find in Haskell and beyond.


> It has done more to turn people off of functional programming that just about anything.

The only thing worse in my book is FP evangelists. FP has some cool ideas and is an interestingly different way to do things compared with imperative programming, but enough exposure to the “FP is obviously the best paradigm and anybody who isn’t a fanatic is clearly just unenlightened” crowd will sour anybody on it.


I found Elm's type system to be extremely cumbersome in comparison to Haskell's.


That's because you already knew haskell and knew what you were missing out on. Elm having a less expressive type system makes it a much better introduction to concepts like ADTs and pattern matching, higher order functions, and enforced immutability, because there's less to trip you up.


So, I've been a Scala programmer for the last 3 years, and have been writing Haskell-style Scala reasonably effectively.

The pure FP Scala community understands your complaints more than the Haskell community.

https://typelevel.org/ is chock full of

a) useful utilities for actually writing applications

b) decent documentation. Not just pages of type signatures, but demonstrations of the libraries' usage.

Everything you need is here: JSON, config, units of measure, streams, testing, validation, a really nice JDBC wrapper (https://tpolecat.github.io/doobie/)

Throw in https://http4s.org/ and you've got yourself a rock-solid, purely functional stack with sensible, documented APIs, a more readable syntax and better tooling support.

I urge anyone who learnt some Haskell, thought "man this shit sucks, I'm never going to write something useful in this" to at least give FP Scala a chance. Here's a useful service template to start hacking with: https://github.com/http4s/http4s.g8.


I feel conflicted about this having programmed in both OO-heavy and pure FP Scala. On the one hand, sure if you want to write pure FP in Scala, some of the tooling is better than Haskell. Most notably the IDE situation with IntelliJ's Scala plugin has made leaps and bounds in progress the last few years and mostly handles pure FP Scala code just fine. And having access to the JVM ecosystem is an absolute god-send and huge boost to productivity. This is true even outside of library dependencies when coding. If you try deploying to production with Haskell, there's often a large gap in production tooling (monitoring, diagnostics, GC tuning, etc.) when compared to the JVM.

On the other hand Scala has its own share of infelicities when it comes to pure FP. I've mentioned this elsewhere, but the core language of Scala, that is the language that is left after you desugar everything, is OO and the FP part is really just a lot of syntax sugar mixed in with standard library data structure choices. That means if you're coming from a pure FP background a lot of things will seem like hacks (methods vs functions/automatic-eta-expansion, comparatively weak type inference, the final yield of a for-comprehension, subtyping to influence prioritization of implicits for typeclasses, monomorphic-only values vs. polymorphic classes, etc.). Treating Scala as an OO language side steps a lot of these warts.

And then there's the social factors; the Scala community is split on how much it embraces pure FP and pure FP is a (significant) minority within the community. This carries over to the library ecosystem where things are basically split into Typelevel/Typelevel-using and non-Typelevel libraries. Many workplaces have a fear (well-placed and not so well-placed) of the Typelevel ecosystem. Years ago Scalaz was something akin to the bogeyman in some places. Cats has a bit of a softer image, but still comes off as an "advanced" library in the Scala ecosystem.

Most of the social weight I feel is behind the non-pure-FP parts of Scala. Sure some of the libraries in pure FP Scala have good documentation (but I don't actually think the situation here is far better than Haskell's once you leave the core Typelevel libraries). The ones with excellent documentation though live outside the land of pure FP (Akka, Play, Spark, Slick, etc.).


You know, I really agree with a lot of this. It's maybe not an unreasonable argument to make that OO-heavy or Akka-ish Scala might be a better investment for a lot of people, considering the weight and momentum of the communities maintaining these, and the expressiveness of the language in these domains.

But I think if we're only talking about pure FP, or at least something close to very pure, I think you're still getting a better deal than Haskell, even in spite of all those quite legitimate downsides you mentioned. My own personal biases mean that I will always prefer pure FP to anything else (I personally didn't love Akka and Play when I used them briefly), but that's an argument for another day.


Perhaps much of what you describe can be attributed to Scala being a multi-paradigm language. As with other programming languages of this nature, supporting multiple paradigms can be both a strength and a weakness, depending on those who use it and their expectations.

Whether this is right/correct or wrong/incorrect is left as an exercise for the reader ;-).


It's not really "equally" multi-paradigm though. All of Scala's FP parts can be desugared into OO. The reverse is not true. This has persisted in Dotty, where proposals to e.g. make typeclasses an atomic entity have been superseded by proposals to continue to encode typeclasses with separate mechanisms. That's the annoying part when doing pure FP in Scala. You can sometimes feel like you're fighting against the grain in a way that isn't true when on the OO side.


> It's not really "equally" multi-paradigm though.

True. Most multi-paradigm languages are not equally strong in each paradigm they support.

> All of Scala's FP parts can be desugared into OO. The reverse is not true.

This is a bit of a strawman argument, as any functional programming environment can be implemented by, or "desugared into", an object system. Contrast this with the fact that mutable OOP systems cannot guarantee Referential Transparency[0] and your second assertion is proven.

However, this simply proves that Scala supports more than one programming style. Whether a given code base employs FP, OOP, imperative programming, or some mixture therein, is a decision left for the system authors and not the language. It is left as an exercise for the reader to determine if that is good or bad.

> This has persisted in Dotty ...

AFAIK, Dotty is intended to be a new experimental language. I do not follow its development nor progress.

> That's the annoying part when doing pure FP in Scala. You can sometimes feel like you're fighting against the grain in a way that isn't true when on the OO side.

I respect what you identify but do not agree with your annoyance. But that's just me.

0 - https://softwareengineering.stackexchange.com/questions/2543...


I don't think it's a strawman. Every language that has syntax sugar gets split by the community into the "core" language and the "sugar" on top. This is a very different comment than saying every programming paradigm can be implemented in terms of another. There are examples of languages where OO is a syntax sugar layer and FP is the core abstraction that OO desugars into. There are not many since for a variety of reasons people do not like doing this, but there's some.

https://programming.tobiasdammers.nl/blog/2017-10-17-object-... is an example from first principles in Haskell.

A more fleshed out version is O'Haskell (which unfortunately died out in the early 2000s).

Mutability can be built (or more accurately faked) on top of referential transparency as well through syntax sugar in a very similar fashion to how Scala builds FP on top of OO. Indeed this was the original impetus behind do-notation in Haskell, but it stopped short of trying to make do-notation look like ordinary Haskell code. If you had syntax sugar that elided the difference between do notation and ordinary equality and unified IO with normal types then you'd have mutability in your language implemented through syntax sugar on top of an immutable core language (you could call it automatic IO expansion to be cheeky that automatically inserts a call to pure in front of any non-IO code used in an IO context). In fact I could see a rather reasonable case to be made for such a construct.
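The do-notation point can be made concrete with a small sketch: the two definitions below are the same program, one sugared and one desugared by hand (`halve`, `sugared`, and `desugared` are invented names):

```haskell
halve :: Int -> Maybe Int
halve n = if even n then Just (n `div` 2) else Nothing

-- Reads like an imperative sequence of statements...
sugared :: Int -> Maybe Int
sugared n = do
  a <- halve n
  b <- halve a
  pure (a + b)

-- ...but desugars into plain nested function application over >>=,
-- with no mutation anywhere in the core language.
desugared :: Int -> Maybe Int
desugared n = halve n >>= \a -> halve a >>= \b -> pure (a + b)
```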

Scala similarly "fakes" (not necessarily a bad thing!) a lot of stuff. This is how automatic eta expansion and special function instantiation syntax (the arrow as opposed to new Function1(...)) elide the difference between methods and functions in Scala and let the language pretend e.g. that methods are first-class entities that you can pass to another method (which is not true, they must be wrapped in a class first just like Java) and let you pretend that methods have the same type signatures as functions (when in fact methods have a special method type that can be polymorphic whereas functions are always monomorphic. In fact you cannot write the true type of a method in a first-class way in Scala; it is a special type that exists outside of the normal type hierarchy that is referred to as a "method type" in the Scala spec). This is leaving aside the encodings of typeclasses, ADTs, fully polymorphic functions (FunctionK in cats), etc.

In all these examples it is the FP concept that is "faked" (higher-order methods and polymorphic functions respectively in the two examples) and the OO concept (method taking an ordinary instance of a class and generics in methods) which is fundamental.

Dotty is explicitly blessed as Scala 3. I would highly recommend keeping high-level tabs on it if you're a Scala programmer. You don't need to know the specific details of it, but note that Scala 2.14 will be built specifically with Dotty in mind. It is the future of Scala (https://www.scala-lang.org/blog/2018/04/19/scala-3.html). And it comes with a lot of goodies! I'm really excited about it. More importantly for this discussion the current encoding of typeclasses in Scala 2 still desugars to implicits + OO classes instead of the other way around (which is perfectly possible where typeclasses are the core abstraction and implicits and OO classes are built using typeclasses).


Another great resource for doing FP in Scala is "Functional Programming in Scala"[0]. It's a very well written book and goes far in introducing key FP concepts IMHO.

0 - https://www.manning.com/books/functional-programming-in-scal...


So, I love me some ML-style languages, including Haskell, but I've also come to think that Rich Hickey is right about the real problems of business programming not being well solved by digging in on things like static typing.

For example, pattern matching against static types is cool, but pattern matching directly against data, Clojure-style, is even cooler. One makes the code a bit more concise and readable, but not necessarily a whole lot more maintainable. The other takes one of the more annoying and error-prone portions of my (say) Java code and renders it far more manageable.

There's a recent LispCast that talks about this a bit: https://lispcast.com/what-is-data-orientation/


I don't know how you can claim that static typing & pattern matching don't help with maintainability. In languages with exhaustiveness checking and good pattern matching you can often make a change in one spot and literally just follow the compiler errors to implement a new feature. They help massively in refactoring, and that's important for maintainability.
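A minimal sketch of the "follow the compiler errors" workflow (the `Shape` type is a made-up example):

```haskell
data Shape
  = Circle Double
  | Rect Double Double
  -- Adding `| Triangle Double Double Double` here makes the case
  -- below non-exhaustive; with -Wincomplete-patterns (part of -Wall)
  -- the compiler points at every such site that needs updating.

area :: Shape -> Double
area s = case s of
  Circle r -> pi * r * r
  Rect w h -> w * h
```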


I find they don't help with maintainability in most of my business applications because they're solving the wrong problem. It's sort of like when someone spends a lot of time carefully optimizing a piece of code that doesn't have anything to do with the application's actual performance bottlenecks, just because that's the part that's more fun to optimize.

The part of the business applications I work on that's a problem is dealing with the outside world. The data is messy. It's inconsistent. The protocols I'm using to communicate are invariably something horrendously loosey-goosey like JSON or XML. Stuff like that. And so, an inordinate amount of time I spend doing business applications in static languages ends up being spent on taking the messy, messy outside world and trying to create a clean, well-typed, rigorous façade for it so that I can operate on it inside my blissfully statically typed fantasy world. And all that static typing never seems to save me in practice, because the software quality problems I run into almost never crop up in the bits that I can operate on in a Haskell-friendly way. It's invariably in some mismatch between the outside world and my domain model that I failed to deal with accurately, which means that it's in my mapping code. Worse, oftentimes it's because of my mapping code, because my statically typed domain model ends up accidentally placing requirements on the input that aren't strictly needed by the business logic; I just unwittingly introduced them in the course of my efforts to get the types to line up.


In practice, this means you would rather nils permeate throughout your system rather than being caught at a system boundary, i.e., where you parse and validate that loosey-goosey outside world data.


Perhaps. I think it's worth considering.

In most of the systems I work in, null permeates both the input and the output, and can often even have its own semantic value. i.e., "there is a value for this key, and that value is null" might actually be semantically distinct from "there is no value for this key". . . it's gross, but it happens, and whether or not it happens is often outside my control.

And I am beginning to suspect that it's actually safer, and even simpler, to fully live in and be fully aware of the reality of the business domain I'm working in, than it is to try and live in a bubble and pretend that life isn't complicated. And I'm also beginning to think that it might be wise to mind my own business. . . which includes not worrying about whether or not a value is null unless and until I find that I need to care whether or not it is null. Rejecting input because a value wasn't set when I had no intention of even looking at it is just such a grave violation of Postel's law. If I find that I'm only doing it because I need to satisfy the type checker. . . seems like a foolish consistency to me.

Perhaps if I could live in an alternate reality where things like JSON and MongoDB hadn't happened, and we instead decided that clean and consistent data is every bit as important when sitting on magnetic disks or traveling through fiber optic cables as it is when bouncing around in silicon. Oh, that would be wonderful. I dream to live in that world. But that doesn't seem to be the reality I occupy.


> [null] can often even have its own semantic value. i.e., "there is a value for this key, and that value is null" might actually be semantically distinct from "there is no value for this key"

That happens. Surely, if you know about this in advance, you can use a type along the lines of

data Field = Field Int | Null | Empty

And if you don’t have this kind of knowledge, well... That’s just a problem waiting for the right time to surface, whether you are using Haskell, Clojure or whatever else.
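To make the distinction concrete, here is a minimal sketch using only base, with a toy association list standing in for a parsed JSON object (the names `Field` and `readField` are made up for illustration):

```haskell
-- Distinguish "value present", "explicit null", and "key absent".
data Field = Value Int | Null | Missing
  deriving (Eq, Show)

-- 'Just Nothing' means the key was present with a null value;
-- a key absent from the list entirely means it was never sent.
readField :: String -> [(String, Maybe Int)] -> Field
readField key obj = case lookup key obj of
  Nothing       -> Missing
  Just Nothing  -> Null
  Just (Just n) -> Value n
```

With real JSON you would do the same case analysis inside an Aeson `FromJSON` instance, but the shape of the decision is identical.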

> And I am beginning to suspect that it's actually safer, and even simpler, to fully live in and be fully aware of the reality of the business domain I'm working in, than it is to try and live in a bubble and pretend that life isn't complicated.

I wouldn’t say Haskell forces you to live in the bubble. Haskell forces you to think, in advance, about the relations between fields and types, sure. It doesn’t force you to use only simple, bubble-y types, though; the types can be something general, or some abomination (like the one above). I’m not aware of a use case where I wouldn’t be able to say “this will always be something”.

The only major distinction is, I would say, the place in code where you deal with the types. Clojure: In the functions all over the place (-), and for some fields, never (+). Haskell: Always in the topmost layer of your app (+), but you have to deal with all of them (-).

That’s the basic tradeoff between those two languages. Which pros and cons are more important depends heavily on your use case.


From the example you gave, it seems like we're talking past each other.

The systems I write also have to deal with JSON with nullable fields, and with fields I ignore while parsing. Aeson for instance gives you complete control over how strict or lenient you wish to be when trying to parse data.

The idea I was trying to convey was that if you care about marshalling data into types a la Haskell, then you can code less defensively when writing code for the data you actually care about. You do that defensive validation in just one place, as opposed to sprinkling nil checks all throughout your system.

If I'm understanding your system correctly, it sounds like you have unreliable input, and you want the same output, only with some of the fields updated if they're there. Haskell readily lets you do this too. The lens-aeson library is perfect for this.

Lots of examples for that here: https://www.snoyman.com/blog/2017/05/playing-with-lens-aeson

Also, I am not sure how you differentiate between a null value and no value, but whatever mechanism you're using you could also use to model those two different types of null as actual types in Haskell.

> Oh, that would be wonderful. I dream to live in that world. But that doesn't seem to be the reality I occupy.

It's hard[er] to discern tone through the medium of text, but it sounds like you're suggesting Haskellers live in some fantasy world where all the data is perfect and everything is pure.

I don't really understand that perspective. I make 100% of my income from writing online business software in Haskell. I employ other programmers to write Haskell for me too. We live in the same world you live in, but we might just approach it differently.


> And I am beginning to suspect that it's actually safer, and even simpler, to fully live in and be fully aware of the reality of the business domain I'm working in, than it is to try and live in a bubble and pretend that life isn't complicated.

I agree.

Of course, if you can't describe the values in a business domain in types, I'd argue you aren't fully aware of its reality. And when you've done that modelling, you aren't only aware of the reality, but you've encoded it so that the knowledge is preserved not only for operational use in the program, but also for anyone who reads the program.


I see roughly the type of issue you face. Where there's a good value/effort ratio for inventing internal representations, Haskell's strength at parsing tasks makes it a good fit. When that ratio is less clear, I've found lenses, albeit a bit foreign to typical Haskell code, to be a really good addition for extract/read tasks (for me it's akin to a new language, like `jq` for arbitrary in-memory structures).

Anecdotal evidence: I recently had to turbo-build a tool to generate statistics over five-digit numbers of very complex business objects (including dates, strings, IDs, booleans as whatever string, ill-named columns) scattered across a number of systems (some web APIs plus some CSV extracts from you-don't-really-know-where). Using 'raw structures' like Aeson's JSON AST with lenses was more than good enough; lenses essentially solved the "dynamic / cumbersome shape" problem. Then I had to create a CSV extract with around 50 boolean fields, and reaching for intermediate techniques like type-level symbols allowed me to really cheaply ensure I was not mixing up two lines when adding a new column. I could even reach for Haxl from Facebook to write a querying layer over my many heterogeneous sources that automatically makes fetches concurrent and cached.

The main difficulty in this setup is to keep the RAM usage under control because of two things. On the one hand, AST representations are costly. On the other hand, concurrency and caching means more work done in the same memory space.

Overall, got the data on time at relatively low effort (really low compared to previous attempt - to a point that some people with mileage at my company thought it would be impossible to build timely). Pretty good experience, would recommend to a friend.


Having some kind of shell which cleans up the messy outside world before moving into the core is a good thing.


Rich is a smart person but I think he's missing some gaps in his knowledge of type theory and experience working with Haskell.

In one of his recent talks he made the claim that `Either a b` is not associative, which... well, it is, since it's provably isomorphic to logical disjunction, which _is_ associative.

I thought what he might be looking for are _variant_ types which are possible to implement in Haskell but are a bit complicated for reasons. There are libraries for it or you can try languages like Purescript or dependently typed languages like Idris, Agda, or Lean.

Regardless, I don't find his particular brand of vitriol appealing. If he doesn't really have a lot of experience working with Haskell-like type systems, why does he feel the need to have an opinion about them?

To be fair I used to have a lot of the opinions pointed out in the article and reflected in many of the comments here. An old blog post of mine [0] muses on the utility of static types. I was seriously into Common Lisp at the time.

The problem with past me was that I hadn't taken the time to learn and understand Haskell to form an opinion. I had learned Common Lisp out of frustration, to win an argument that it was an old, crusty language that nobody used or needed anymore... and lost. I hadn't done the same yet for Haskell and would join the chorus of people repeating things like, "Haskell is an academic language but is not pragmatic for real-world use." It's embarrassing looking back on it.

I've learned enough Haskell in recent years to ship a feature into a production environment and teach a small group of people to hack on it. It's pretty great and I much prefer working with it than I do with weakly-typed or dynamically typed languages. The amount of work I can do with the amount of effort I have to put in has a great ratio in Haskell. The initial learning curve to get there is hard but it's worth it in the end.

[0] https://agentultra.com/blog/strict-types/


What's the difference between 'business programming' and other types of programming? I don't really know what distinction people are trying to make here.


Dealing with the messy real world with exceptions to rules and evolving shape of data vs writing compilers and other internally consistent closed systems.


If you have messy/dirty data then you just use an associative data structure like a Map in Haskell, just like any other language.
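A rough sketch of what that looks like, assuming `containers` (which ships with GHC) and treating each record as an open map rather than a closed record type (`Row` and `normalise` are hypothetical names):

```haskell
import Data.Char (toUpper)
import qualified Data.Map.Strict as Map

-- Open-shaped row: unknown keys flow through untouched.
type Row = Map.Map String String

-- Touch only the field we care about; everything else passes along as-is.
normalise :: Row -> Row
normalise = Map.adjust (map toUpper) "country"
```

`Map.adjust` is a no-op when the key is absent, so missing fields don't blow up.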


Sure, but the idea is that the idiomatic way of working in the language accommodates passing around data that is not necessarily closed in shape. I.e., intermediate functions will by default also pass along attributes that they don't have knowledge of, for example. And checking data-shape conformance is customizable (via the "spec" system).


Ok, let's not do the runtime vs compile checks thing here. I was just pointing out there are options that solve similar problems. There are other ways to deal with sub-functions not needing access to the whole structure as well. But let's not expect Haskell and clojure to have exactly the same features.

If you want to use clojure, then go for it. Use what you want to use.


I think that, at least insofar as I understand the problem I was trying to speak to, it's so deeply entangled with the runtime vs compile checks thing that it's impossible to have a coherent discussion without dealing with the subject head-on.

Here's where I come down on it:

There are some kinds of projects where you can cut off most potential problems at the pass with compile time checks. In those cases, yes, you absolutely want to statically render as many errors as possible impossible. Compilers come to mind as a shining example here.

There are others where the nastiest bits invariably happen at run time, though. And, for a significant number of those, the grottiest bits fall under the general category of "type checking" - not checking types in the structure of the code itself, per se, but checking types in the actual data you're being fed. And, since you don't get fed data until run time, that means all that type checking has to be done at run time. There's no sooner time at which it's possible. There's some tipping point where that becomes such a large portion of your data integrity concerns that it's just easier to delay all your type checking until run time, so that you are dealing with these things in a single, clear, consistent way. If you try to handle it in two places, there's always going to be a crack between them for things to slip through.


I am sorry that Haskell and clojure people have to fight. You don't see me telling Clojure folks when and where to use the tools they enjoy working with.

I think Haskell is an excellent language for servers and APIs. It really excels as a backend language. So, I'm sorry you think Haskell is only good for compilers, but I think the range of use cases it's good at is much broader than "compilers".

Haskell is best thought of as a better Java. I wouldn't select it for every problem, but server APIs and backend work are a really good fit.

Also I think Clojure is great. We can both co-exist in this world though. It is possible.

It's unfortunate the OP picked on python - it's not the style of post that I would write.


I am sorry that you somehow think this has become a Haskell vs Clojure fight. Me, I actually use Haskell a lot more than I use Clojure, and generally think it's a great language with a lot to offer.

But I also believe another very important thing: There is no silver bullet.

Because I believe that, I am able to recognize that even the things I love have some limitations. And I don't believe that this should be a fight, and that is why I think I should be able to articulate what I have found to be the limitations of a tool, and acknowledge that some other tool that other people like might have something to offer in this area. Without being perceived as a hater for doing so.


Others may see it differently but I see it as the kind of programming where:

* Bugs are typically caused by misunderstandings of requirements or something odd about the interactions between different systems, and rarely about internal logic.

* Where quick is often better than proven correct.

* Where requirements are in constant flux and where a lot of code is tossed out because it was the result of a failed experiment or an unwanted feature.


Accidental vs essential complexity is a similar concept:

> Brooks distinguishes between two different types of complexity: accidental complexity and essential complexity. Accidental complexity relates to problems which engineers create and can fix; for example, the details of writing and optimizing assembly code or the delays caused by batch processing. Essential complexity is caused by the problem to be solved, and nothing can remove it; if users want a program to do 30 different things, then those 30 things are essential and the program must do those 30 different things.

https://en.m.wikipedia.org/wiki/No_Silver_Bullet


The difference between solving a problem, and solving a problem for someone else for money.

If I'm solving a problem for myself, if it breaks in 'prod' then "Ooops", I try and avoid that, if I'm expecting other people to use it I will document public interfaces and write unit tests, but my focus is on scratching my own itch, not getting paid.

If I'm writing code that thousands of people's livelihoods, or millions of users' buying decisions, depend on, the stakes are higher. I might decide to use a more rigid language like Java, because the chances that I'm going to be given the freedom to replace rather than repair classes is slim. Similarly, if I persuade a client that microservices are the way forward, I'm going to spend significant time making sure we have a monorepo so each service has the same timeline, an automatic deployment pipeline, and I'm going to want to be able to defend my technical choices with economically sound data... and that's where Hickey kicks in. Many of the economically sound data sets are actually vapour.


As someone who takes sublime pleasure in writing types around a domain, Rich Hickey's rightness on this issue makes me profoundly sad.

Here's an article I read a while back (from the same author it turns out!) which nearly converted me to the dynamic-types camp: https://lispcast.com/clojure-and-types/


For me, that article and the lecture it talks about planted the seed, and then a year or two of doing data engineering type work watered and fertilized it.

That observation on `Maybe` really hit home in a big way. Not at first. Eventually. I used to think that banning null and using `Maybe` instead was the best idea ever. I still love the basic idea, and wish I could always work that way. . . but nowadays I'm so frequently working in the limit case, where everything is either optional, or used to be optional, or will be optional in the future, or is officially required but somebody didn't get the memo. And it's like in some Zen parable where the student keeps getting hit with a stick until they're enlightened. Bruised, bloodied, and enlightened. You either have it or you don't. む.


For me it was the part about typing JSON. I work at a company with a Common Lisp back-end and Let Me Tell You. Trying to enforce the JSON structure it generates using any kind of front-end JS types, so that nil-punning doesn't crash the UI every other day, is an exercise in attrition. Unfortunately JS is not Clojure and so can't as elegantly digest inconsistent data, but I certainly have learned to appreciate the appeal of that philosophy.


Hi _bxg1,

Cycorp? Do you need more Lisp developers? I'm trying to switch jobs.


> pattern matching against static types is cool, but, compared to pattern matching directly against data, Clojure-style, is even cooler

Maybe it's cool, but I think static types can be "cool" too. When you take the fashion statement out of the question, what are you left with? One is polymorphism at runtime and the other is at compile time. I'll take the compile time one.


why not both when applicable?


They do exist.

I use

http://hackage.haskell.org/package/optparse-applicative for small cli apps

http://hackage.haskell.org/package/envparse For any Docker microservice

http://hackage.haskell.org/package/aeson-1.4.5.0/docs/Data-A... For all JSON work. Sometimes I’ll use it with lenses which is massively powerful but a rabbit hole

I’ll use http://hackage.haskell.org/package/stm when dealing with parallel execution

https://github.com/brendanhay/amazonka For anything dealing with AWS

https://github.com/haskell-works?tab=repositories Projects for Kafka and avro

http://hackage.haskell.org/package/warp For trivial micro services or Scotty if more than a few endpoints

http://hackage.haskell.org/package/persistent For dealing with Postgres

http://hackage.haskell.org/package/parsec For dealing with any text parsing.

The tools are available, they can make things like cli apps and micro services trivial. However if you have never used a ML language before you will have a steep learning curve as very different to C style/based languages.

I was once of the opinion that Haskell is academic, what can you use it for in the real world. Then I studied it, played with it (admittedly on and off over 1-2 years), and hit hurdles where I had to think differently from anything I'd learnt before. Eventually it clicked; it's now very hard and frustrating in my day job using typical enterprise or popular languages. It's not about convincing, it's about having an open mind and wanting to learn something different.


"Packages exist for doing this" isn't the same thing as "this is a good ecosystem for doing this kind of work" though.

I'm fairly convinced that Haskell is good at preventing the kinds of bugs which you might run into writing, say, a parser or other kinds of very complex logical code, but I'm less convinced that the nature of the language helps with the kinds of issues you get hooking together APIs, databases, etc.


Nearly all of those packages are best in kind, and most actually in a different league from anything that you'll get in another language.


You assert this, but the parent asked for it to be shown...


> show me what I need to know to do what I do

That's the only one occurrence I can find. I think the GGP made a good job of answering that.

I'm not going to walk over the differences as each of those would be a many hours task just for explaining.


Sure, they exist. But pointing out that they exist is not "show me what I need to know to do what I do."

Instead, the next high-profile HN Haskell link will be the thousandth demo/intro/tutorial that yet again implements some core data structure.

Rather than showing how Haskell is great for working with databases.

Or working with Kafka.

Or integrating with AWS.

Or parsing text.

Or running microservices.

Or any of the other hundred things that I'm going to do a dozen times before I need to reimplement binary trees.


Each of these things is well-explained by the documentation of the respective libraries. I'm not sure why you feel like you need someone to write you a long-form story in order to learn how to do these things; convince yourself of the merits that others already see.


I disagree. Look at the difference in documentation between Haskell's Amazonka and Clojure's Amazonica. There are no simple code examples to get you going. It took me forever as a Haskell newbie to get DynamoDB integration working. In Clojure I just copy an example and play with it.


Are the code examples in eg. http://hackage.haskell.org/package/amazonka-1.6.1/docs/Netwo... not sufficient?


The situation seems to have improved a bit since I last looked, but I still think it needs a basic how-to. I know you can figure most things out by looking at the types, but that's exactly where newbies lack experience, and you have to search quite a bit for the information here. But thanks for the link; it's definitely better than it was.


Yes, there is definitely no single through-line from "i know nothing" to "I can now program a microcontroller in Haskell" or whatever. It's a language which grew out of academia and still has a whiff of self-learning about it.


Because they're boring and nobody wants to vote them up.


Those look really promising. My boss is big on TDD and open to developing cutting edge technologies. I'm going to have a look and see if I can propose a project using these tools.


In case you're serious https://tech.fpcomplete.com/haskell/promote may help.


FP Complete is a consulting company. They promote Haskell, and they boast about getting hired to help a company analyze their code base to fix a space leak. I wouldn't trust my business to a language that requires hiring a consulting firm to do fundamental debugging. I've written memory leaks, but never needed to hire consultants to debug them.


Consulting is rarely about problems that are too hard for a good developer to fix. It's often about problems that the customer doesn't want to put (or hire) good developers on.


Haskell is lazy by default, and sometimes builds up large unevaluated expression trees that need to be forced. Other languages are eager, but litter the code with abstractions like Callables and Futures and channels just to not do something yet.
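The classic illustration of this is `foldl` versus `foldl'` (a small sketch, nothing exotic):

```haskell
import Data.List (foldl')

-- Lazy foldl accumulates ((...((0+1)+2)...)+n) as unevaluated thunks
-- and only forces the whole chain at the very end; foldl' evaluates
-- the accumulator at each step, keeping memory use flat.
lazySum, strictSum :: [Int] -> Int
lazySum   = foldl  (+) 0
strictSum = foldl' (+) 0
```

Both produce the same answer; on a large enough list, the lazy one can exhaust memory before producing it.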


Does Data.Aeson parse JSON as UTF-8 and numbers as decimal? You know, in real world business applications we don't use float/double or ASCII.


Of course. It uses the Text [0] and Scientific [1] packages under the hood. The internal ADT representing JSON is actually very simple (as json is very simple itself):

  data Value
    = Object (HashMap Text Value)
    | Array (Vector Value)
    | String Text
    | Number Scientific
    | Bool Bool
    | Null

[0] http://hackage.haskell.org/package/text-1.2.3.1/docs/Data-Te...

[1] http://hackage.haskell.org/package/scientific-0.3.6.2/docs/D...


Haskell makes you hate your day job, but doesn't offer you a better day job, because it is a tease of something better.


Sum types are infinitely useful in regular, non-algorithmic code you find every day. Look at something like Redux and Redux actions: those are essentially sum types, and would benefit greatly from a pattern-matching syntax. The benefits of this stuff apply literally anywhere.
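For instance, a Redux-style reducer is just a fold over a sum type; sketched in Haskell (the `Action` constructors here are hypothetical, not from any library):

```haskell
-- Hypothetical Redux-style actions as a sum type.
data Action
  = Increment
  | Decrement
  | SetCount Int
  deriving (Eq, Show)

-- The reducer pattern-matches every case; with -Wall the compiler
-- warns if a new constructor is added but not handled here.
reduce :: Int -> Action -> Int
reduce n Increment    = n + 1
reduce n Decrement    = n - 1
reduce _ (SetCount m) = m
```

Dispatching a list of actions is then just `foldl reduce 0 [Increment, SetCount 10, Decrement]`, which evaluates to 9.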


Came here to say exactly this.

And also to point to this excellent article: https://pchiusano.github.io/2017-01-20/why-not-haskell.html


Joel Spolsky called it two decades ago: https://www.joelonsoftware.com/2002/12/11/lord-palmerston-on...

Until you’ve done Windows programming for a while, you may think that Win32 is just a library, like any other library, you’ll read the book and learn it and call it when you need to. You might think that basic programming, say, your expert C++ skills, are the 90% and all the APIs are the 10% fluff you can catch up on in a few weeks. To these people I humbly suggest: times have changed. The ratio has reversed.

Read the whole article, it's pretty amazing. Parts of it are just as relevant to mobile development today as to desktop development back then.


You might be interested in checking out https://typeclasses.com/. You have to subscribe, but they have some free content, including this nice section: https://typeclasses.com/phrasebook, which gets right into printing to the console, and working with state, multi-threading and mutation.


When I first studied compilers, I was amazed that writing a compiler used every other subfield of computer science I'd studied. It's the acid test of language design. A language that can easily be used to write compilers can do almost anything well.

And every non-trivial program I've worked on is 90% of a compiler. (You could describe compilers as just "some business logic in the middle", too, if you were in Architecture Astronaut mode.) You don't think HTTP servers use "pattern matching"? You don't think API endpoints would benefit from "ridiculously easy" testing? This is your bread and butter.

This article is showing how to implement if-statements and null-checks in Haskell, and reduce your code size by half. I bet you have some of those in your software. I'm not sure how this could be much more relevant, without reducing its usefulness by being overly specific.


You are conflating computer science (compiler writing) with software application development.

In the real world, software has a CS "core", plus 95% boring data copying and gluing APIs together, where availability of libs and tools is far more important than theoretical correctness properties and the most general abstraction possible. This is why Haskell looks so nice in blog posts but terrible in a production system.

In Haskell, updating fields in records is still an active area of research.


Doesn't the presence of "general abstraction" allow us to write less of the "boring" parts? I thought that's the whole reason for it.

Isn't "correctness" a useful property, even for a web service? I think that being able to eliminate entire categories of bugs is terrific, in any situation.

Sure, it's always nicer to have good libs/tools than to have to write your own (though that distinction is much less important when your language has great abstraction capabilities). Are there any libs/tools you're missing in Haskell? The way your comment is phrased makes it sound like good old fashioned FUD.

I'm not sure what you mean by the last sentence. It seems to still be an active area of research all over the place. Look at Swift/Rust/Go/Java, or Postgres/Mongo/Datomic, or ext4/zfs/btrfs/ntfs. Everybody updates records in very different ways. It's not like all Algol-derived languages have the same data model.


> In Haskell, updating fields in records is still an active area of research.

In Haskell, everything is still an active area of research. Mutable data already exists in Haskell and has sensible semantics.

But the goal of Haskell (so far as I can see, at least) is good, reliable, maintainable code which produces good, reliable, maintainable programs. If this is not what you look for in a programming language then it might not be for you.


I wrote a library for that. Hooray: you can version your types and seamlessly upgrade them, and the compiler will never let you cross streams by accident.

https://hackage.haskell.org/package/DataVersion-0.1.0.0/docs...


This, 100%. If Haskell proponents focused more on the stuff described in https://pragprog.com/book/swdddf/domain-modeling-made-functi... instead of algorithms, a lot more devs could use it in their day-to-day job.


It's all there to be used; it's just that, unfortunately, Haskell proponents don't seem to talk about it as much. Most of the discussion is around "interesting" stuff.

My company does everything in Haskell, but it's almost all just boring plumbing code. APIs, JSON, databases, HTTP stuff, HTML templating, etc. It works great.


Exactly this. I've been meaning to write a blog post about what my team does with Haskell for the last 3 years, and while I think there are some gems of information we could provide to the world, at the end of the day it's incredibly boring stuff that works well for us and isn't really worth mentioning because it's not that different from what everyone else is doing with their own favorite language and tooling.


Can you share any of your toolchain and stack?


Sure. Here you go.

https://news.ycombinator.com/item?id=21024494

As for libraries we use heavily (in no particular order): Yesod, Persistent, Esqueleto, Lens, Lens-Aeson, Hedis, STM, Wreq, Shakespeare, Hspec.


It's the other way around. Haskell is good at plumbing, while real algorithms are better done in a fast low-level language.


Haskell is good at plumbing in research programs, not in production systems that require monad transformers or extensible effects.


No system requires monad transformers or extensible effects. In some cases they become useful, particularly when dealing with unusual computational contexts; but most of the time you can use IO, and sacrifice the sharpest edge of type-safety for an easier job of plumbing.


In the real world, you end up with logging, ways of propagating errors, etc.

All of which tend to introduce monads.

When you have multiple monads on the go, you need some way of combining them.


Yes, but as I was saying logging, errors and so on can all be handled directly inside of IO. Given that you can't do any of these things in a pure function anyway, the only loss of dropping into the wider context of IO is type safety.
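A sketch of what that plain-IO plumbing can look like; all the names here (`logMsg`, `lookupUser`, `fetchUser`) are made up for illustration, not from any particular library:

```haskell
import Control.Exception (SomeException, try)
import System.IO (hPutStrLn, stderr)

-- Logging: just an IO action, no Writer/LoggingT stack.
logMsg :: String -> IO ()
logMsg = hPutStrLn stderr

-- A fallible lookup that throws in IO rather than returning Either.
lookupUser :: Int -> IO String
lookupUser 1 = pure "alice"
lookupUser n = ioError (userError ("no such user: " ++ show n))

-- Error propagation: catch at the boundary with 'try', so callers
-- still get an Either without any ExceptT in between.
fetchUser :: Int -> IO (Either SomeException String)
fetchUser uid = do
  logMsg ("fetching user " ++ show uid)
  try (lookupUser uid)
```

The trade-off is exactly as stated above: everything lives in IO, so the types no longer distinguish "logs" from "can fail" from "talks to the network".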


I learned Haskell and it's a giant waste of time. I'm not the only one who regrets functional programming either. Since then I vowed never to take things seriously that have zero connection to real world results. It's not possible to regret mastering x86 assembly for example, because even if that skill is relatively useless, it makes you better at many tangential things which can be decidedly useful; C and OS programming for example. Functional programming doesn't have this potential because it (proudly) exists as an abstract model with no connection to the machine or any physical processes for that matter. It's the string theory of computer science.

If you like math/category theory, go deep on the math itself. Your knowledge will be transferable to more than just some man-made story (like a programming language).


> Since then I vowed never to take things seriously that have zero connection to real world results. It's not possible to regret mastering x86 assembly for example, because even if that skill is relatively useless, it makes you better at many tangential things which can be decidedly useful; C and OS programming for example. Functional programming doesn't have this potential because it (proudly) exists as an abstract model with no connection to the machine or any physical processes for that matter

Hm interestingly I actually felt like learning Haskell had a huge benefit to my day job, probably much more than I imagine learning x86 assembly would. (Though admittedly I have learned some assembly in the past and that was also helpful).

I feel like Haskell forced me to write better code by forcing me to think about side effects. I don't know that I would actually use it in a production project because unfortunately real projects often rely a lot on state, even if constrained to a subsystem, and I still find it difficult to reason about the performance of a Haskell program.

Not trying to invalidate your point; perhaps you were already very good at what I learned via Haskell =] I admit I also find Haskell much more enjoyable to program in for the most part.


> I feel like Haskell forced me to write better code by forcing me to think about side effects.

I see, but wouldn't that be possible by "forcing" yourself to think about side effects in the language you were already using?

I mean, at the end of the day, it's learning the functional programming concepts that is supposedly going to make you a better programmer. Why not just start learning those in Java, Python, JS, etc.?

One thing I hear a lot from people who've learnt Haskell is they admit they're probably never going to use it in a real-world project (for so many reasons). Then, isn't it inefficient and a waste of time to learn parts of Haskell that are only found in Haskell?!

If I were to learn FP (which I will soon), I'd choose to learn it in the language I'm using now. It's not only more efficient, but also I'd enjoy being able to put what I've learnt in practical use.


It’s easy in Python, for example, to rely on state without realizing you are. The Haskell compiler forces you to account for it everywhere.


> I mean, at the end of the day, it's learning the functional programming concepts that is supposedly going to make you a better programmer. Why not just start learning those in Java, Python, JS, etc.?

That could totally be possible for many, but for me, I need something more concrete. I had read about Haskell and the benefits of immutability and agreed from a high level, but until I actually used it, I didn't feel like I understood it.


> I mean, at the end of the day, it's learning the functional programming concepts that is supposedly going to make you a better programmer. Why not just start learning those in Java, Python, JS, etc.?

Because every OOP/imperative programmer I've known in 18 years takes the easy way out and doesn't think about side effects or immutability; they never proactively reach for them.

Granted my bubble is not representative of the world but this trend is nevertheless quite telling. I also never proactively reached for FP techniques in OOP/imperative language until I learned my first FP language.


This is a very narrow-sighted perspective. Most application-level programming these days happens in languages that are "abstract model[s] with [little] connection to the machine or any physical processes". You just happen to be a type 2 programmer: https://josephg.com/blog/3-tribes/


One thing I've found interesting in my very limited experience of Haskell is the connection to formally verifiable properties. Programs written in C or assembly very often have correctness and safety problems—among many others, memory corruption problems. Haskell and many related systems help provide tools to check programmatically that various large classes of error condition can't be present in a code base.

You might think that this isn't worth the mathematical rigamarole that comes with it and that it's grown up with, but as people have seen in a number of other HN threads, formal methods are having a renaissance now and the tools we have that engage with them can get us a lot further than they could in the 1960s.

I have written many bugs, of many different kinds, in other languages that could have been detected automatically by a type system like Haskell's. I'm not suggesting that that makes the other languages inherently bad, or that other programmers (or I) couldn't adopt other methods that would also help avoid these errors, but I think the ease with which Haskell's approach can do it is something interesting to consider.


This is what attracted me to FP to begin with, the "if it compiles it works" meme.

Program verification and functional programming are separate things. Ada predates Haskell by at least a decade, and it's not functional at all. Rust is kind of a revival of that in proving memory safety via borrowing; not functional either.

But the set of programs which can be formally proven is smaller than the set of all programs, so I'd rather not miss out by only making formally proven software. (The entire field of deep learning is a good example of useful code which can't be formally proven.)


I totally disagree. Functional programming teaches practices that are helpful regardless of what paradigm you use. Pure functions by definition adhere to dependency injection and single responsibility principle. While most programming languages don't enforce immutability, being aware of mutation is a generally helpful skill. There's a reason that lambda have become table stakes for new programming languages, and that's because composing functionality is a generally useful feature. I have never been paid to write in a functional language, but learning from functional languages has always improved my general programming skills.


Doing FP makes you a better programmer because you're writing programs. I'm taking the null hypothesis that after a year of writing functional code you would have improved just as much as you would have writing non-functional code. I have literally never seen anything to demonstrate otherwise other than hand waving and personal anecdotes.

> Pure functions by definition adhere to dependency injection and single responsibility principle.

Dependency injection and SRP are other man-made constructs with dubious utility in the same vein as functional programming.

> There's a reason that lambda have become table stakes for new programming languages, and that's because composing functionality is a generally useful feature

Lambda in functional programming is supposed to be a primitive you use to do everything ahem y combinators ahem. In Java 8/C++11/Swift, the lambdas you speak of are used only as embedded subroutines.

Functional programming selects for better-than-average programmers to begin with--the "programmer's programmer" who writes code for fun and probably visits this site. You're unlikely to convince the person who writes enterprise .NET, has never used anything but Windows, and never opens a text editor after work to learn Haskell. The functional programmers were already good before they became functional programmers. Then you get the cargo-cult type of intrigue: "X writes really good code. X also uses $FP_LANG! Be more like X!"


Yeah, this more or less completely misses the point of what a working programmer would need to be doing. I honestly can't remember the last time I had to implement a data structure for real, and not just for the hell of it because I was bored. The closest I come is generally some sort of domain-specific shell that composites some data together.

I feel like the F# community tends to be more grounded in reality. Or at least I'm more exposed to the side of it that is trying to popularize it in Microsoft, as a useful tool for Domain Driven Design and the like.


I'm relatively new to Haskell, but Yesod has a good reputation for web application logic, and moreover comes with a free book introducing both itself and Haskell: https://www.yesodweb.com/


I've been curious: what does it take to find a job in software that isn't just gluing things together? Where do you have to go to write interesting algorithms at work? In our golden age of open-source, does that stuff only ever happen at FAANG and research labs?


Those jobs exist, but are few and far between, since cool ideas tend to be packaged up into reusable libraries, and don't need to be redeveloped, while everyone else uses these reusable libraries differently, so you have a constant need for "sanitation engineers". Tensorflow or OpenCV or UnrealEngine get written once, then everyone uses them to build whatever they're building. You write some cool glue around these, that's the interesting code, then spend most of your time wiring everything together into a larger package that actually does something useful.

Once you've wired everything together, now you need to track version, build artifacts, manage release processes, test them and qualify them, etc, so you do a lot of QA and DevOps work.

QA, DevOps and sanitation engineering is so product specific that it can't be packaged up, it's a craftsmanship position, and that's why all of us spend most of our time doing this kind of work.


Yeah, that's what I figured. It's a bit depressing when you think about it. I guess for most of us that itch just has to be scratched by pleasure-programming.


I don't know how long you've been doing this, but I've been continuously at it for almost 30 years now, and no matter where, there are challenging problems to solve, particularly outside the "web app with a backend" family of products. Doing optical recognition of malformed chicken patties as they whiz by at 60mph on a conveyor belt sounds mundane, but it's a lot more fun than writing REST API's to a SQL backend.

In your career, you will go through new and excited, then sort of bored, then you will find a niche that's both interesting to you, and that you're good at - you become an expert in something. At that point, you do as much of that as you can, and the sanitation engineering doesn't seem so bad when its directed at something interesting to you, especially when you work with colleagues who are better than you at something and challenge you professionally.


> I don't know how long you've been doing this

Much less than 30 years :-)

Thanks for the insight/encouragement!


I look at this the other way around: if you’re only writing CRUD applications and relatively simple user interfaces for them, you tend to end up doing join-the-dots programming, but for almost any other field I’ve ever worked in, things get more interesting.

For example, we have a client at the moment who makes a type of device with a lot of user-configurable behaviour. An embedded web server allows access to its UI from a browser, and we were originally brought in to build that web UI for them. On the face of it, this is a substantial but straightforward SPA development, just one where the back end happens to talk to APIs that communicate with various physical components in the device rather than a traditional database.

However, it turns out that the way users view that device and how they want it to behave is very different to the way you have to program the various physical components to make useful things happen. That means even in this superficially simple project, we have some interesting algorithms in the front-end to present application-specific visualisations of the current state of the system, and we have a much more involved algorithm sitting behind that UI that converts the user’s tidy, intuitive view of the world into the very untidy and often counter-intuitive data formats required internally to program the hardware components.

To make this comment slightly more topical, I’ll also mention that the behind-the-scenes algorithm is essentially a form of compiler, taking a serialised version of the user’s world as input and running through a pipeline that systematically converts the data into the required internal formats. The first generation was written in Python, and has proven to be reasonably successful, but we are always a bit nervous about maintaining it just because of the number of edge cases and interactions inherent in the world we’re operating in. For the second generation, we made a big decision to go with Haskell instead, and for this sort of work, there were very welcome benefits including greater expressive power when writing data-transforming code and a strong type system that prevents mistakes like accidentally forwarding data from one pipeline step to the next without applying an appropriate transformation.

I agree with Beltiras’s point in the GP comment, and I probably wouldn’t choose Haskell to implement the kinds of software mentioned in that comment for much the same reasons. However, it definitely has real value in the sort of situation I described above, where we have both integrations with other systems but also substantial data crunching to do.
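The "can't forward data without applying the right transformation" point can be sketched with newtypes (all names here are hypothetical, not the client project's): each pipeline stage produces a distinct type, so skipping a stage is a compile error.

```haskell
-- Hypothetical three-stage pipeline; the real transformation logic is elided.
newtype UserConfig  = UserConfig String
newtype Normalised  = Normalised String
newtype DeviceBytes = DeviceBytes String

normalise :: UserConfig -> Normalised
normalise (UserConfig s) = Normalised s

encode :: Normalised -> DeviceBytes
encode (Normalised s) = DeviceBytes s

compilePipeline :: UserConfig -> DeviceBytes
compilePipeline = encode . normalise
-- compilePipeline = encode   -- would be a type error: a stage was skipped

main :: IO ()
main = case compilePipeline (UserConfig "cfg") of
  DeviceBytes s -> putStrLn s
```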


Everywhere! 3rd gen factory robotics, and manipulator control. Movie cg renderers and asset transformation pipelines (heck, get busy and finish inkscape!) Ehr systems. ERP systems. Risk management systems. Decision support systems. Trading systems. Field service support systems. Scheduling, packing, routing systems.

Now everything has an ui, and data innies and outies, but gosh! That's just connectors to the diamond!


Most people at Facebook and Google aren’t writing algorithms either.


What I'm about to say is a criticism of you, but please don't take it personally. It's also a criticism of me, because my day job is essentially the same as yours.

In general terms, almost everything you and I do is either a CRUD app, or something overcomplicated which would be better-implemented as a CRUD app. There's not technological advancement happening here. And usually you're not even doing a new CRUD app, you're just reimplementing an earlier CRUD app with better CSS and JS and a different marketing team to tout it. IF there's an innovation in your company it's not the CRUD app developer who's innovating. We're just reimplementing the wheel over and over, because the other wheel implementations are closed source and owned by a competitor.

If you want to innovate, you have to take on harder problems that aren't CRUD apps. That's where languages like Haskell shine. Haskell doesn't shine because it's better, it shines because it's different, and suited for different tasks. The tasks for which Haskell is suited haven't been saturated yet, so there's still room for innovating on the technical side of things.

So yeah, I can't show you how to do what you do with Haskell--the reason you'd want to use Haskell in the first place is to do something different from what you (and most other developers, myself included) are doing. The reason you'd want to write Haskell is to solve technical problems which haven't already been solved.

You're right to bring up binary search trees and linked lists as criticisms of the Haskell community, because those are also pretty solved areas: touting binary search and linked lists as the powers of Haskell completely undersells Haskell. Haskell learning materials fall into two basic categories: complete beginner stuff, and Ph.D. theses written in an alien language. This is, unfortunately, part of why there's any innovation to be had here: having very little mid-level learning material creates a barrier for entry that keeps people from reaching a level where they could innovate. The same is true for the communities of many less-widely-used languages.

This has been a rather cynical post, I realize. I'm not sure there's any recommendations here: I'm certainly not going all-in on learning Haskell and using it to innovate, myself. CRUD apps pay the bills and innovation is risky. I find interest and novelty in other areas of my life.


THANK YOU


I've been using Haskell for quite a bit in production. My personal take, as an engineer who is generally skeptical of fancy language features:

Plus:

- The type system. It can make your life a huge pain, but in 99% of cases, if the code compiles, it works. I find writing tests in Haskell somewhat pointless - the only place where they still have value is gnarly business logic. But the vast majority of production code is just gluing stuff together

- Building DSLs is extremely quick and efficient. This makes it easy to define the business problem as a language and work with that. If you get it right, the code will be WAY more readable than most other languages, and safer as well

- It's pretty efficient

Minus

- The tooling is extremely bad. Compile times are horrendous. Don't even get me started on Stack/Cabal or whatever the new hotness might be

- Sometimes people get overly excited about avoiding do notation and the code looks very messy as a result

- There are so many ways of doing something that a lot of the time it becomes unclear how the code should look. But this is true in a lot of languages


I never really understand why I would want a type system until I learned Rust and was forced to learn it. Now I don't understand what I was thinking before..


What were you using before?


Mostly python and C. With python I didn't really have a type system and with C I never truly realized what the typesystem could give me.


I suspect one would learn appreciation of limited type systems from the other direction by using a typeless language.


I wouldn't. Why would you want to take errors that happen at compile time and create runtime errors? In dynamic languages you still have type invariants, it's just now they are invisible and can break your code at runtime.
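As a toy illustration (names are mine): the "this lookup can fail" invariant is visible in a Haskell type, and the compiler pushes the caller to handle it.

```haskell
ages :: [(String, Int)]
ages = [("ada", 36), ("alan", 41)]

-- lookup returns Maybe Int: the possibility of absence is part of the
-- type, and the compiler flags a missing Nothing case instead of
-- letting the program crash at runtime.
describe :: String -> String
describe name = case lookup name ages of
  Just a  -> name ++ " is " ++ show a
  Nothing -> name ++ " is unknown"

main :: IO ()
main = mapM_ (putStrLn . describe) ["ada", "grace"]
```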


A few things come to mind.

For a dynamically typed language, a REPL ends up being essential. In a lot of ways, REPLs can be a superior form of programming. They are often much harder to get working with statically typed languages (often too much ceremony around the types).

The other thing that comes up is sometimes those compile time errors are somewhat pointless. For example, in many cases the difference between an int and a long are completely inconsequential. But further, whether or not your type is a Foo with a name field or a Bar with a name field or an Named interface simply does not matter, you just want something with a name field. While static typing would catch the case of passing in something without the name field, it unnecessarily complicates things when you want to talk about "all things with a name field" (Think, in the case of wanting to move Foo, rename Bar, etc).

Then there is the new concepts you need to learn. With dynamic typing, you can write "add(a, b) { return a + b; }". But how do you do that with Static typing? Well, now you need to talk about generics. But what if you want to catch instances where things are strictly "addable?" now you are looking at constrained generics. But what if you want a specialized implementation? Now you are potentially talking about doing method overloading. What if you want a different style of adding? Now you might be talking about plugging in traits. Typing and type theory have a tendency to add a requirement that you learn a whole bunch of concepts, but also that you learn how to correctly use those concepts.
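For comparison, here is roughly how the "add(a, b)" example looks in Haskell: the constrained generic is one line of type signature, though the point about having to learn these concepts still stands.

```haskell
-- "Addable" is expressed as the Num constraint; one definition works
-- for every numeric type, with no overloading machinery.
add :: Num a => a -> a -> a
add a b = a + b

main :: IO ()
main = do
  print (add (2 :: Int) 3)
  print (add (1.5 :: Double) 2.25)
```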

It is no wonder dynamic typing has its appeal. Dynamic languages are generally low on ceremony and cognitive burden.

I say all this being someone that likes static typing. Just want to point out that dynamic typing has its appeal. Obviously, the big drawback is when you come back to a dynamically typed language and you want to fix things. It can be insidiously hard to figure out how things are tied together and you get no aid from the language.


> For a dynamically typed language, a REPL ends up being essential. In a lot of ways, REPLs can be a superior form of programming. Those are often much harder to get to work with static type languages (often too much ceremony around the types).

Haskell has probably one of the best and most useful REPLs around.

> But further, whether or not your type is a Foo with a name field or a Bar with a name field or an Named interface simply does not matter, you just want something with a name field.

This is perhaps an argument for a structural type system, IMO. Though I completely disagree with it.

> Well, now you need to talk about generics. But what if you want to catch instances where things are strictly "addable?" now you are looking at constrained generics. But what if you want a specialized implementation? Now you are potentially talking about doing method overloading

Those same invariants are still in your dynamic code, it's just now they are invisible to everyone and will crash at runtime if broken.

> It is no wonder dynamic typing has it's appeal. Dynamic languages are generally low on ceremony and cognitive burden.

There is a low cognitive burden on the writer, but for every refactor afterwards, and anyone who wants to change your code later, the cognitive burden is higher.

Dynamic typing does have an appeal, but it seems to be shrinking these days while people wake up to the benefits of static types. And it's no wonder, they just make sense from a pragmatic perspective.

Thanks for your post. I mean, I disagree with almost everything you said, but it's interesting to hear the perspective.


> Then there is the new concepts you need to learn. With dynamic typing, you can write "add(a, b) { return a + b; }". But how do you do that with Static typing? Well, now you need to talk about generics. But what if you want to catch instances where things are strictly "addable?" now you are looking at constrained generics...

You lost me here. All of those "what if's" seem to apply equally to dynamically typed languages. If I want "a + b" to work with two Python classes that I just wrote, I'm probably going to have to implement __add__ methods on both classes, and possibly with non-trivial implementations. It's not like dynamic typing makes everything magically addable, with no burden on the developer. Wouldn't you agree?

Not to mention that languages such as Haskell and OCaml have REPLs too. They are not as robust as, say, Common Lisp's -- but REPL-driven development is hardly a stranger in the statically typed camp.

I agree that both camps have their appeal, though!


In typeless languages like Forth, assembly, etc., debugging is more confusing than in Python, because you often see the situation blow up multiple layers after the error happened. Or you may simply get silently wrong answers. And in C, just the pointer-vs-value distinction in the type system makes code dramatically clearer and catches lots of errors.

This is how limited type systems are still very much better than no types.


> This is how limited type systems are still very much better than no types.

This is not what I was arguing against. I was arguing that I would be surprised if people were happy to go from strong typing to dynamic typing, not that 'limited' type systems are better than 'no types'.


Ah. My original comment was about how limited type systems are still useful, in response to someone not seeing their point when using them. Communication is hard :)


I would be interested in hearing counterarguments to this!


Not parent, but I’d be surprised if the answer didn’t include JavaScript, Ruby or Python.


Yep, GHC is an incredible compiler and Haskell is an incredible language. If only Haskell had a package manager as slick as Cargo, I'd use it for just about everything.


The great news is that tooling is rapidly improving these days. VSCode with the new ghcide is simply amazing. Ghcide is still new and thus has a few rough edges (eg. TH) but overall it lifts the Haskell IDE experience into the 21st century. Give it a try.


What about runtime stuff? I've found Haskell is very good at abstracting away your concerns about the runtime considerations and sometimes it will come back to bite you. I mean all that stuff you want to see into when you're running your app in production, like caching, function invocation times, threads etc.


Can't really comment on that - most of the code I use is built on the awesome Haxl[0] so we never have to worry about those things. I'm curious what other people think though

[0] https://github.com/facebook/Haxl


> Don't even get me started on Stack/Cabal

Is Stack not adequate? I thought the consensus was that it was ok. Is it really more horrible than say Maven or SBT or whatever in other languages?


I guess it's comparable to Maven, but people usually use Maven with an IDE, and Stack on the command line, so that's not very encouraging. Anyway, compared to something like NPM (Node.js) or Cargo (Rust), its user experience is extremely lacking.


I use Maven and SBT from the command line. In general I don't think the difference between a bad or good UI lies in whether it's command line based.

I'm not familiar with Cargo, but NPM gives me nightmares!


It's not about whether or not it's command-line based, it's that a lot of pain can be patched over by a good IDE.


This is a good discussion of a problem that suits Haskell well, but it's unfair to Python in some respects.

For example: I haven't read or written a lot of Python in a while, but would Python programmers really want to implement mutation in such a class by copying dicts around? The hand-wringing about "oh no, I wrote the condition as not is_node" is silly since one could just define an is_leaf method that can be used without negation. And "changing heapToList to return a lazy sequence makes it no longer return a list, oh no!" is just as silly, since one would of course not do that but define a separate heapToSequence (and probably base heapToList on that).

Also: "pattern matching and algebraic data types which have massive, clear benefits and few (if any) downsides". They have downsides whenever your data is not tree-shaped. Yes, a lot of data is tree-shaped, but a lot isn't. I work in compilers, a field often touted as a prime application of ML-family languages, and this is very true, but not 100%. If you can have sharing in your abstract syntax tree (like a goto node that might refer to a target node somewhere else in the tree), you start to have difficulties. And you get even more difficulties when you try to model control flow graphs and the like. Nothing insurmountable, but still things where it's suddenly good to have other tools than only algebraic datatypes. OCaml is good in this regard.


You are free to use a Map in Haskell whenever it suits you. Nothing requires you to use ADTs when it would not make sense to do so.


Sure, I said the issue was not insurmountable. But (and yes, I know Haskell programmers won't necessarily agree) there are contexts in which following a direct, mutable reference is superior to indirecting through a map.


Sure, Desktop GUI programming tends to be one of those contexts. But I think those contexts are much rarer in Server APIs, and the internet has shifted the focus of programming from GUIs to APIs.

I wouldn't recommend Haskell for a desktop GUI, but it's excellent in the API world. It's more of a backend language.


There are some really cool things in haskell-gi. Haskell has the ability to make some wonderful DSLs. Have a look: https://haskell-at-work.com/episodes/2018-11-13-gtk-programm...


Haskell is a beautiful research language. Its use case is to supply subjects for PhDs and MScs, which it fulfills perfectly. Also it's extremely fun to learn and play with.

I would never bring it to the production though, reasons being:

1) Production code should be understandable by an on-call person at 4 am. If business logic is buried under layers of lenses, monad transformers and arrows, good luck troubleshooting it under stress. And real systems do break, no matter type safety.

2) It's a research language, and a big part of the research is about control flow. And therefore haskell has _way too many_ ways to combine things: monad transformers of different flavors, applicative functors, arrows, iteratees, you name it. And libraries you find on hackage choose _different_ ways to combine things. In the business code you probably want to combine multiple libraries, and you inevitably end up with unholy mess of all these approaches. Dealing with it takes more time than writing business logic.

3) Developers look at these fancy research papers and try to reproduce them. As a result, very basic things become extremely hard and brittle. I saw a real-life example where applying a transform to all fields of a record took a team 2 days of discussion in Slack, because "writing this manually?! it won't scale to a record with 100 fields".

4) Architecture is extremely sensitive to the initial choices due to isolation of side effects. Because if you suddenly need to do something as simple as logging or reading a config in a place where it wasn't originally anticipated, you're in for a bumpy ride.


I've done commercial Haskell work and it was fantastic. Almost all of the things you mention are problems with technical leadership and not Haskell itself. These problems are all very, very easy to deal with.

1) Many of us have also seen the C++ library or application that busts out the template metaprograms at every opportunity. Haskell is the same. Use restraint when bringing in new libraries and programming techniques. (as an aside, I have yet to encounter a single problem that I thought merited the use of arrows)

2) You also see this in other languages, though to a lesser degree. The same techniques we use with other languages work just as well in Haskell: As leadership, form a strong opinion on the best way to write control flow and ensure that everything uses a consistent style.

3) Are your engineers running around implementing random research papers? This has nothing to do with Haskell.

4) In practice, this is almost never a big problem. Refactoring a pure function to an effectful function is easy and safe. In a production setting, almost all of your functions run in some monad already, so it's pretty rare to run into a situation where you suddenly need to percolate effects through many functions.


> problems with technical leadership

I've never been lucky enough to get a full-time Haskell job (nor did I try), so my experience is based on single-man development & prototyping using the existing stuff found on Hackage. I presume that it may be different for a huge organization that can afford to build the entire ecosystem from the ground up while enforcing design rules (and these rules must be created by an exceptionally brilliant architect!). But in other languages you don't have this situation.

> Many of us have also seen the C++ library or application that busts out the template metaprograms at every opportunity.

C++ and boost set standards a bit low, to be honest.

> Are your engineers running around implementing random research papers? This has nothing to do with Haskell.

Not research papers, just the usual business logic. The problem is that research papers set the code style. Also they are not "my engineers".

> In practice, this is almost never a big problem.

It typically is, because changing the internal implementation is reflected in the interface, and for each invocation of the function the change has to propagate through _the entire_ call stack to some place running in the IO monad or whatever.

> almost all of your functions run in some monad already

You defy your own argument. If that's how you design application, what's the point of bothering with side effect isolation then?


The bulk of real Haskell programs do not choose such granular restriction of side-effects that it's unmaintainable. As always, there are tradeoffs. In most applications, the code at application boundaries live in some Monad that are based on IO.

For example, Yesod gives you the Handler monad, which is based on IO, and really just exists to provide access to runtime/request information, so at the API handler level you can do anything you need to. But what's nice is that not everything has to live in IO, and so in places where it makes sense, such as a parser, we can say that the parser doesn't need IO, because that wouldn't make sense.

And my point here is that just having the ability to separate IO from non-IO is very useful; we also do not have to split every single effect into its own special, separate type; in many cases that would just be overkill.
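The IO/non-IO split can be sketched without any framework (hypothetical names; the real Yesod Handler adds request plumbing on top of this idea): the parser stays pure, and only the boundary code runs in IO.

```haskell
-- Pure: parsing can't do I/O, and its type says so.
parsePort :: String -> Maybe Int
parsePort s = case reads s of
  [(n, "")] | n > 0 && n < 65536 -> Just n
  _                              -> Nothing

-- Boundary code lives in IO and calls the pure core.
handler :: String -> IO ()
handler raw = case parsePort raw of
  Just p  -> putStrLn ("listening on " ++ show p)
  Nothing -> putStrLn "bad port"

main :: IO ()
main = handler "8080"
```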


> You defy your own argument. If that's how you design application, what's the point of bothering with side effect isolation then?

I don't understand this.

If you have a monad like

  do
    ...
    _ <- doSomething arg1
    let result = pureFun arg2 arg3 :: Result
    ...
How is that not simpler and better than having that pureFun invoke a bunch of side effects and perhaps even mutate something that you have to be aware of? How does this take away the benefits of side effect isolation?


Because as code evolves you may find you need some logging in pureFun and now you have to go and sprinkle some IO wherever pureFun is used, which may propagate to who knows how many other places.

Of course, I suppose there is always unsafePerformIO, at least for those late-night debug sessions.


> Of course, I suppose there is always unsafePerformIO, at least for those late-night debug sessions.

For temporary debug output you should probably consider the Debug.Trace module before unsafePerformIO. It has a simpler interface and doesn't introduce the possibility of arbitrary I/O, just text output (and event logging if that's enabled in the RTS).
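A quick sketch of Debug.Trace: the function keeps its pure type signature, and the message is emitted (to stderr) whenever the traced value is demanded.

```haskell
import Debug.Trace (trace)

factorial :: Int -> Int
factorial 0 = 1
factorial n = trace ("factorial " ++ show n)  -- debug output, no IO in the type
                    (n * factorial (n - 1))

main :: IO ()
main = print (factorial 3)
```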


This happens less often than you might expect.

If you're able to run the code locally, there is a function called Debug.Trace.trace which writes messages to stderr without requiring IO.

If it's a production issue, all you need to do is figure out what is being passed to your function. It's a pure function, so you can always debug it locally once you've got that.


Why would I need logging inside a pure function?


Because you suspect that it has a bug and need to see intermediate outputs to verify.


Haskell doesn't work that way at all. If you have variables inside a function, due to laziness their values may or may not ever be computed. Moreover, the Haskell compiler sort of reduces the equations, so the functions are not imperative and the computations would not happen as you expect them to. Logging would not get you anywhere.

However, you can just log the output if you like without putting IO into the function. I see zero reasons why anybody would push such logging into production. It behaves exactly the same way on your own computer as it would in production - it's just a pure function.


I know that Haskell's laziness adds an extra layer of complexity to this problem.

However, disregarding the implementation details, here is the abstract problem: I have a program which is reading some complex data structure from IO, applying a complex, pure pipeline to it, and sending the reply back through IO.

Now, say the output is not correct in some way. Your only chance of debugging the issue is to somehow visualize some of the intermediate values, right?

This being a pure pipeline, you don't need to have logging in production. You could just log the input value and, when a bug occurs, run the same input value from the logs through a modified version of the original pipeline with some logging sprinkled throughout - you're right on this point.

Now, I expect that there are many situations where adding the logging in production is still preferable - maybe the pure pipeline is in development and some kinds of issues are frequently occurring, maybe the pipeline takes too long to re-run, so it's just easier to have the logs from the first run, etc.


OK Thanks for adding some context!

What I'd do is simply break down the larger pure function into smaller pure functions and sequence them in the monad.

This way you would still retain purity with your functions but you could log every intermediate step by simply logging their output (and its corresponding input). Would probably make the code more maintainable as well.
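
A rough sketch of what I mean (the stage names are made up) - the stages stay pure, and the IO sequencing is only there to surface the intermediates:

```haskell
-- two small pure stages of a hypothetical pipeline
normalize :: Int -> Int
normalize = (* 2)

enrich :: Int -> Int
enrich = (+ 3)

-- sequence them in IO purely so each intermediate can be logged
runPipeline :: Int -> IO Int
runPipeline x = do
  let y = normalize x
  putStrLn ("after normalize: " ++ show y)
  let z = enrich y
  putStrLn ("after enrich: " ++ show z)
  pure z

main :: IO ()
main = runPipeline 5 >>= print  -- logs 10 and 13, then prints 13
```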

From what I understand, Haskell has great tooling for function profiling. Unfortunately, I have no experience with it yet, so I can't say if it would help you solve such problems.

Edit: Btw sounds like property based testing might work well as a solution to this kind of problem. It works really well with pure functions.


That's right. Java is advertised as a "workman-like language" (today we should say worker-like) because it doesn't require a guru leader to get a working project built.

At my job, software is only half the engineering; we need to save half our brains for product/business issues.


Java is also advertised as "enterprise" language and it can't even encode in its type system that a piece of data can be missing. Or maybe it can, with Optional<T>, thanks Haskell.

I can't say I had fewer problems understanding magic-container IoC stuff than most Haskell concepts.


To be pedantic, Java perfectly encodes the information that a piece of data can be missing - any type supports null as a valid value. What it can't do is encode the information that a piece of data CAN'T be missing.


To be even more pedantic, Java's primitive types (int instead of Integer etc.) have this feature. They can't be null.


> maybe it can, with Optional<T>

Not even that. Java's optionals are almost useless, needlessly verbose, you still don't have pattern matching, and there's also still null to contend with.
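
For contrast, a minimal sketch of the Haskell counterpart - absence is in the type, and pattern matching covers both cases with no ceremony:

```haskell
-- Maybe encodes "may be missing" in the type itself;
-- the compiler warns if a case is left unhandled
greet :: Maybe String -> String
greet (Just name) = "Hello, " ++ name
greet Nothing     = "Hello, stranger"

main :: IO ()
main = do
  putStrLn (greet (Just "Ada"))  -- Hello, Ada
  putStrLn (greet Nothing)       -- Hello, stranger
```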


Many others have replied to your concerns already, nevertheless allow me to add my own experience to the pile, as someone writing Haskell for food:

1) For a team of Haskell programmers reading Haskell code is just the norm. It's the same as for a team of Python programmers reading Python code. Also as a side-note: I never in my 15 years of professional life had a 4am call. Maybe I'm lucky, I don't know. In my experience automated tests, code reviews, sane deployment policies, etc... all minimise the chance of this ever happening. Haskell's powerful type system helps a lot here as it saves you from a whole array of problems upfront. I feel much more confident deploying Haskell code to production than code in any other language.

2) In commercial projects engineers tend to do things in sane and simple ways. This is not surprising really. Just cause you can does not mean you should. It's not Haskell specific but a general principle ie. KISS.

3) "Fancy research papers" at times can actually be really helpful. If you happen to have a problem that someone wrote a paper about that saves you time having to rediscover the solution. Having lots of research papers is imho a great strength of Haskell. That being said KISS from above applies here. If your problem is simple then do it simply.

4) Architecture in general is much less sensitive to change in Haskell than in other languages in my experience as the type checker guides you when you want to change your code. Refactoring in Haskell is pure joy. Re logging/configuration/etc... there are well-known patterns to deal with these as you would imagine, it's not like no one had to solve these problems in Haskell before. There are also tools (eg. Debug.Trace) in case you need some quick ad-hoc printf-style debugging. The same really as in any other language.

I don't want to write about all the pros of the language, others have done so already, just wanted to add my own professional experience to the mix.


Agree with #1. I think Elm is probably the best functional language in this regard. Contains all the important bits yet is small and opinionated enough to be easily readable.

If only it weren't web-only and run as a private project. An Elm-like language with an LLVM backend would be amazing.


I think Elm gets old after a couple weeks when you realize how handicapped it is as a language. Personally, I think something like Reason/OCaml is a great place to start. Haskell is a good choice too but laziness adds an additional hurdle. The core of Haskell is actually pretty simple; it's algebraic data types, parametric polymorphism, and typeclasses.
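
To illustrate that core, here's a small sketch (the Shape type is made up) combining all three: an algebraic data type, a type parameter, and a typeclass constraint:

```haskell
-- an ADT with a parametric type variable `a`
data Shape a = Circle a | Rect a a

-- the Floating typeclass constrains `a` without fixing it
area :: Floating a => Shape a -> a
area (Circle r) = pi * r * r
area (Rect w h) = w * h

main :: IO ()
main = print (area (Rect 3 4 :: Shape Double))  -- 12.0
```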

> If only weren't web only and run as a private project.

Yeah, the way the project is run is troubling.


Elm is like the Go of the FP world. It really works for some people, but others find working in a kneecapped language infuriating.


IMO it's the best language for absolute beginners to FP.


Elm is a copy-paste language (too much repetition; maybe that can/will be improved).


I write Haskell commercially, and none of the problems you list are problems my team actually has in reality.


Your comment would be much more interesting and constructive if you actually took the time to refute the individual points instead of just saying that it's not a problem for you.


How do you deal with #4?


How is #4 even a problem to begin with?

If anything, it's the inverse: "isolation of side effects" means the architecture is more flexible and lego-like -- as opposed to a spaghetti mess.

And if you need to change interfaces, you are in for one of the smoothest rides, as the compiler will guide you every step of the way to implementing the change wherever it's needed.


> How is #4 even a problem to begin with?

Let's say you are making an RPG where you can equip items. You write a lot of pure functions and behavior for stat adjustments etc. Then the designer comes to you and says that items are no longer constants: every item needs a durability which goes down on use. In Java this would be a few-line change, just add a new field to items and add a line to subtract it in relevant places. In Haskell you would need to bubble the new item change up to the global state and properly set it in each part of the code where you use items. If you aren't careful you can easily miss bubbling it up somewhere and the item doesn't get updated, or you might have the same item referenced in several places and now you'd have to refactor it to have a global item pool, since tracking down all the places that should be updated is not feasible.


> In Java this would be a few line change, just add a new field in items and add a line to subtract it in relevant places.

And then find out that since items used to be constants all instances of a given type of item are actually the same item object, so modifying one unexpectedly affects them all. Or worse, sometimes items are copied and sometimes they're shared by reference, so whether they're affected depends on each item's individual history.

One of the benefits of Haskell is that it forces you to think through these aspects of the API in advance. A mutable object with its own distinct identity and state is a very different concept from an immutable constant and the design should reflect this. Changing one into the other is not an action to be taken lightly.


Not a Haskell expert, but I think I know what you mean: you are passing items by value to many functions and complaining they aren't modified in others when you call them, or that you need to compose the flow so updates are properly "bubbled".

But assuming the code was working before and you had, for example, a function use :: Item -> Item, and you change durability in this function, what else do you need to change to "bubble" the new state? I don't get this.

BTW, what's the problem with a global store and passing only IDs? I feel like this is probably a valid approach; ECSs are implemented in a similar manner anyway, AFAIK - https://en.m.wikipedia.org/wiki/Entity_component_system


As I understand it, a function use :: Item -> Item in Haskell guarantees that it can't change any property of any item; even more so if the function were use :: Item -> Effect, as could be done on the initial assumption that Items can't change.


In general you can't mutate records. Nothing prevents you from making a new one that's slightly different though. It just won't change the already existing one.

In practice this is not actually a problem. It just takes a little getting used to to get yourself out of the 'memory-address as identity' mindset that procedural languages have.
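
Concretely, for the durability example upthread (Item and its fields are hypothetical names from this thread), record-update syntax builds a new value rather than mutating the old one:

```haskell
data Item = Item { itemName :: String, durability :: Int }

-- "using" an item yields a new Item; the original is untouched
use :: Item -> Item
use i = i { durability = durability i - 1 }

main :: IO ()
main = do
  let sword = Item "sword" 10
  print (durability (use sword))  -- 9
  print (durability sword)        -- still 10
```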


Can you provide an example (even in pseudocode, syntax doesn't matter here)?

I don't think that's how you'd handle a state change in idiomatic Haskell. What you propose sounds very error-prone.


> And if you need to change interfaces, you are in for one of the smoothest rides, as the compiler will guide you every step of the way to implementing the change wherever it's needed.

This is the same in any statically typed language. Actually, the quality of Haskell/GHC's error messages is limited by constraint-based type inference, and languages that have flow-based type inference do a better job IMO.


> This is the same in any statically typed language

Surely not in any statically typed language. Languages like Java cannot encode the same useful information (or rather, you cannot force them to) as languages like Haskell. Specifically, you cannot make them enforce lack/presence of IO in their types. Most mainstream statically typed languages cannot do that, in fact.


But it's considered good practice in Haskell to keep IO out of the main logic of the program and basically use it as little as possible. Haskell is certainly not an IO-focused language like Go and Rust. The IO monad is almost more of a deterrent than a tool.


That's only partially true. Of course a Haskell program needs to do IO to be useful. The IO Monad is also not a deterrent, where did you get that idea? Haskell is very much IO focused, in fact it's been jokingly called "the world's best imperative language"!

Even then, in Haskell you can say "this doesn't do IO" which you cannot in most languages!

It's also just an example of the expressiveness of the type system, which "most statically typed" cannot enforce or sometimes even express.
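
A minimal sketch of that expressiveness (the function names are made up) - the signature alone says whether IO can happen:

```haskell
-- cannot perform IO: the type rules it out
pureSum :: [Int] -> Int
pureSum = sum

-- may read the outside world: IO is visible in the type
sumStdin :: IO Int
sumStdin = (sum . map read . lines) <$> getContents

main :: IO ()
main = print (pureSum [1, 2, 3])  -- 6
```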


>Production code should be understandable by an on-call person at 4 am.

I don't like the "simple language means understandable code" argument being thrown around as if it were some obvious fact. I'm currently working on a project whose major component is a byzantine home-grown message passing framework written in pretty much Java-in-C++ (old school OO with liberal use of shared pointers. Not that I would call C++ in any form "simple", but I do think that C++ without template magic is simpler than Haskell with GHC magic, or, say, any Lisp with a lot of macros). Maybe it's a case of the grass being greener on the other side, but I'd gladly take a monad transformer stack or a DSL written in Lisp macros over doing the same song and dance over and over again just to implement a tiny piece of new functionality, or trying to deduce the logic from someone else's sea of boilerplate.

I mean, taking this logic to extreme, one could say that code written in assembly is easy to understand: it's adding a value to EAX now, then it jumps over there...

This being said, Haskell does have something in it that encourages one to design a ballistic algebra that runs in the type system before even pointing a gun at one's foot.


> This being said, Haskell does have something in it that encourages one to design a ballistic algebra that runs in the type system before even pointing a gun at one's foot.

Haskell is not even a total language, so the type system doesn't magically divert the gun from your feet. You can easily prove False in Haskell (there's a Prelude function for doing this!), enough said. And if you want types to keep your feet safe, it's possible of course, at a _great_ cost, and not in Haskell.


I was referencing an old joke; that Haskell programmers allegedly spend so much time polishing their types that they never get around to actually writing anything useful.


"1) Production code should be understandable by an on-call person at 4 am. If business logic is buried under layers of lenses, monad transformers and arrows, good luck troubleshooting it under stress. And real systems do break, no matter type safety."

Let's assume that your on call person is a developer. (If the person is not a developer, (s)he is not going to comprehend any language.)

1) If your application is written in Haskell, the on call developer would more than likely have been involved in the code base and reads Haskell. Your assertion about not comprehending it seems suspect.

2) You have a serious process issue if a developer is woken from a sound sleep to track down, correct, thoroughly test, and promote a fix for the bug in the middle of the night. The number of cases in which this happens should be close enough to zero to call it zero.

Your comment sounds like an attempt to cast Haskell as a purely academic language because it fails a (poorly) made up real world scenario. I'm NOT a Haskell-er so I don't feel a need to defend it. I do find the reasoning behind the criticism poorly thought out.


1) do you mean to imply all languages are equally parseable?

2) You’ve never worked in a company where a bug brought down prod and an on-call has to fix it?


1) No, that would be ridiculous. My point is that a developer working with Haskell daily to develop an application would have no problem reading it in the middle of the night. The biggest problem for a developer, who may not have written the module in question, is not understanding the requirements for the module.

2) Early on and even then rarely. After working in a hospital writing and supporting an electronic patient chart system, I found it hard pretending that anything short of life and death was serious enough to be woken over.


Do you have any evidence to back this up?

In my experience working in the largest Haskell team in the world, dealing with various applications and services in production: none of what you said is close to reality.


All the evidence is out there in Hackage. I backed up my points with some argumentation at least, and you dismissed it based on personal experience that we have no way of verifying. So, not changing my mind yet.


You have hardly backed anything up, I don't see any links in your original post. From my perspective, your thoughts are as anecdotal of that of the other poster.


What links do you need? If you know anything at all about the Haskell ecosystem, you already know where to find all these papers/sites/presentations about arrows, pipes, lenses and what not.


Yes, I know where to find descriptions of language features. That doesn't prove to me that they're often misused or that I might commonly have a problem with code that others have written because of it.

You've recounted your experience and detailed that you have seen issues with things in Hackage, and I won't claim your experience is invalid. I just won't let you claim your experience is that much more valid than someone else's without proof.


Maybe you could shed some light on how to deal with the issues mentioned?

Especially point 4 sounds very painful. I frequently have to add logging or special case handling at different places in our code base.


In case you need to add logging, you can always close your eyes and use unsafePerformIO anywhere, since from the application's point of view, logging is just a write-only side effect.

It's all about how much you isolate side effects from each other. If you go nuts with isolation, you lose flexibility. If you write all your code in the IO monad (meaning any code can have side effects), you are as in other languages, where you can perform a side effect anywhere you want.


Thanks for the response. Logging doesn't seem to be an issue then, nice.

I'm still curious as to how to introduce variations though.

Frequently we find that we have to introduce "if (settings['special_x']) then DoSpecial else DoCommon" deep in the business logic to handle special cases that arise.

Assuming the existing function did not depend on the settings, how does one best introduce this dependency on the settings? In our code base we can cheat by effectively having the "settings" dictionary as a global variable, though we try to minimize this obviously. But in a pinch at 4am, that might be enough.

How about Haskell, is there a way to "inject" such settings without refactoring all the way to the top?


If you're injecting your settings at the root of your reader monad (which almost all Haskell applications are structured around), how many function calls are you between your monad stack and where you need to thread settings to? 3? Maybe 4? This isn't an issue in practice. It's a 4-line diff.

Source: 3 years of pure FP Scala in prod
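
A sketch of the shape being described, in Haskell (Settings and specialX are made-up names). In base, a plain function from the settings already behaves like a Reader, so adding a flag later only touches the record and the functions that actually inspect it:

```haskell
-- hypothetical settings record injected at the root
data Settings = Settings { specialX :: Bool }

-- business logic reads the settings it was handed; no globals needed
businessLogic :: Settings -> String
businessLogic s
  | specialX s = "DoSpecial"
  | otherwise  = "DoCommon"

main :: IO ()
main = putStrLn (businessLogic (Settings { specialX = True }))  -- DoSpecial
```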


unsafePerformIO will work. But again, why bother with side-effect isolation if you need to break it to do trivial stuff?


Of course you can sneak in Debug.Trace everywhere if you don't compile with -XSafe. But by doing this you basically break your own rules.


At which company is the largest Haskell team in the world?


StanChart, the Cortex team. Equivalent to the Slang/SecDB team at Goldman.


Then Haskell failed. To quote from Haskell 2010 Language Report, the primary goal of Haskell was: "It should be suitable for teaching, research, and applications, including building large systems".

Applications and building large systems are explicitly goals of Haskell. Haskell was never intended to be a research language.


"Avoid success at all costs" is also an explicit goal of Haskell.

>Haskell was never intended to be a research language.

Intended and actively tried not to be are two different things...


To clarify, it’s avoid “success at all costs.”

Rather than “avoid success” at all costs.

It’s a joke in the Haskell community that relates to the language’s early history and its ability to break things without much fuss. Things have settled down a lot more in the last few years, hence the shift in emphasis.


No, that is not an explicit goal of Haskell. To prove otherwise, please quote from Haskell 2010 Language Report.


You know, it's just possible that the Haskell 2010 Language Report is not the only valid statement of the "goals of Haskell".


> Production code should be understandable by an on-call person at 4 am. If business logic is buried under layers of lenses, monad transformers and arrows, good luck troubleshooting it under stress. And real systems do break, no matter type safety.

You can say the same thing about production Java code buried under layers of frameworks, factories, facades, proxies, aspects, annotations, etc. I don't see why readable, maintainable code can't be written as easily in Haskell as Java, C++, Python - you just have to make writing clean code a priority.


I think this points to a difference that I don't see highlighted enough - influential doesn't mean best.

If you are stuck on a plane, are you going to watch Citizen Kane or something from the last couple of years? What about an Akira Kurosawa movie? They top film critics' lists of the 'best' movies, but really they are some of the most influential movies of all time.

Lisp and Haskell aren't the 'best' languages to use the vast majority of the time, but they are heavily influential and have contributed a lot to computer science.


>4) Architecture is extremely sensitive to the initial choices due to isolation of side effects. Because if you suddenly need to do something as simple as logging or reading a config in a place where it wasn't originally anticipated, you're in for a bumpy ride.

Not a serious Haskeller at all, so not sure if this is a terrible practice, but for logging, `unsafePerformIO` could work in a pinch without changing types.


>Architecture is extremely sensitive to the initial choices due to isolation of side effects.

This is fundamentally why it's useless in practice. It requires you to anticipate/predict everything ahead of implementation.

It requires you to be perfect at Waterfall ( https://en.wikipedia.org/wiki/Waterfall_model ).

Real-world problems laugh at such idealism.


> This is fundamentally why it's useless in practice. It requires you to anticipate/predict everything ahead of implementation.

This is simply BS. Haskell is actually the language (that I know of) where, IMO, your initial choices matter least. And that's because it gives you a great refactoring experience.


'Great refactoring experience' relative to what? An imperative language?

Yeah right! I don't have to refactor extensible code.


Can you give an example of what you mean by "extensible" here?


Adding a new setter method to a data object buried in 10 layers of abstraction.

In Ruby it's changing the attr_accessor line.

In Haskell objects are immutable, so you need lenses.


To most (if not all) production ready languages out there.


Most languages out there including dynamic ones have built-in IDE support that tracks refactoring down to comments.

Don’t know what the state of Haskell refactoring is now but it looks like until very recently “great refactoring” was “change one thing and then manually fix compiler errors”.


Change one thing and manually fix compiler errors is what I'm talking about.

Point me to something like that for, let's say, Python. For example, to error when I misuse the return value of a function. Or when I decide that this string is no longer a string but a customer name. Or when my function will call an IO operation but it's not allowed to do so. Or when I add a new field to a record and it's not fully constructed somewhere.
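
The "string becomes a customer name" case is a one-line newtype in Haskell (CustomerName is a made-up name here), and the compiler then flags every place still passing a bare String:

```haskell
-- a zero-cost wrapper: same runtime representation as String,
-- but a distinct type the compiler checks everywhere
newtype CustomerName = CustomerName String

greet :: CustomerName -> String
greet (CustomerName n) = "Hello, " ++ n

main :: IO ()
main = putStrLn (greet (CustomerName "Ada"))
-- greet "Ada" would now be a compile-time type error
```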


This is what I dislike about Haskell discourse:

- We are a statically typed language with significantly better support for refactoring than anything else out there

- But...

- Let's say Python

Let's say Java, why don't we? A language with significantly better support than "change one thing and manually fix compiler errors".

See IntelliJ's capabilities: https://www.jetbrains.com/help/idea/refactoring-source-code.... (there are 23 subsections)

And even Python's capabilities are significantly better than manually hunting through compiler errors (though obviously not as extensive): https://www.jetbrains.com/help/pycharm/refactoring-source-co...


I guess I took the bait when you "included" dynamic languages, so your little conversation is misleading at best.

On topic: Oh, these are great and I wish Haskell had more of that.

But these are "dumb" actions, really. How about:

* writing terrible code to get something working, then breaking it down and putting it back up as something production ready

* changing core data structure and updating all the code that uses it

* solving some problems in separate projects and combining them together

* copy-pasting code from another project and adjusting it to a current project

All of the above of course without running the program once in between, without changing the desired program's behavior etc.

And no, I don't think you can't do that in Java, I just think that Haskell is better at it.


All of the above are a combination of trivial automated code refactorings and some manual work, because there's no silver bullet:

* writing terrible code to get something working, then breaking it down and putting it back up as something production ready

Automated: Extract method, move code, pull members up/down, rename etc.

* changing core data structure and updating all the code that uses it

Depending on what exactly you're changing. From as simple as automated "update type signatures of all affected functions and methods" to automated "rename" to yes, hunting down compiler errors

* solving some problems in separate projects and combining them together and

* copy-pasting code from another project and adjusting it to a current project

Once again, depending on what and how you combine, using the things from above.

There are more powerful tools than "oh, let's run the compiler for each change and manually fix all the errors". The flow should be "let's run all the automated refactorings and then run the compiler to see if those missed something". And these automated refactorings are often reversible with a simple Cmd/Ctrl-Z.

Ah, yes. And many of these are available even for dynamically typed languages (but not as powerful and sometimes with caveats, for obvious reasons).


Yes, Haskell is worse at those "trivial automated" refactorings (due to worse tooling; nothing about the language prevents them), but IMO better at the "hunting down compiler errors" part. And I take easier manual work on hard problems over trivial automation any time.

Java and Python are simply not languages that encourage (or even make possible) writing code that holds up under those changes. In Haskell you have immutability (if something is declared, that's its final value, so I can simply move it around), purity (code that is basically context-independent), dumb data structures (no "enterprise boolean" with nontrivial internal state and ceremony to initialize), functions not attached to mutable data that may implicitly require the data to be in a special state, no nulls in flow control (which are invisible to the compiler, so good luck moving around a function that expects no nulls), and ergonomic creation of data/types (so you don't have strings/integers everywhere that are opaque to the compiler).

There is no contest, really. Maybe with some modern languages like Rust or TypeScript, but I haven't tried those.


> And I take easier manual work on hard problems over trivial automation any time.

However, most changes/refactorings I usually encounter are due to trivial problems that can be automated. And I'd love to have those tools for FP where a lot of changes are still just that: rename a function or a field in a data structure, extract a function, rename/move a module, update type signatures etc.

However, it looks like the stronger the static typing, the worse are the tools :) Not enough manpower to take care of them, perhaps?


Yea... that's a trade off, unfortunately, and I won't pick sides here. Stronger static typing has so much potential for tooling, but none of that is reality, sadly.

As for Haskell, I think, it's a combination of manpower (so popularity, really) and the fact that the compiler wasn't developed with tooling in mind.

Thanks for conversation, stranger, and have a nice day!


And same to you, kind stranger!


> most changes/refactorings I usually encounter are due to trivial problems that can be automated

This is interesting because another popular argument against Haskell is "Haskell's type checker can only check trivial properties that can be automated".

Either automating trivial things is valuable or it's not, and the anti-Haskell camp seems not to agree on which!


Automating trivial things is immensely valuable. And yes, Haskell’s type checker can only check a subset of things.


Perhaps you didn't understand my point.

Being 'good at refactoring' is only a desirable feature if you are spending a lot of your time refactoring code.

Why are you spending so much of your time refactoring code? Because your language isn't extensible!


> Being 'good at refactoring' is only a desirable feature if you are spending a lot of your time refactoring code.

Not a fan of TDD, eh?


TDD has its uses. BDD has its uses. Code first - test later has its uses. YOLO has its uses.

Not a fan of silver bullets.


I don't disagree with the "no silver bullet" quote, but you can't answer every question with that phrase. I also agree re TDD, etc.

But your comment is still odd. Refactoring seems to be a core tenet of modern programming (regardless of TDD). In TDD in particular it's a central part of how you work, and so your remark sounds odd unless you also claim you don't believe in TDD :)


[flagged]


Do you understand the difference between extensibility and refactoring? For the purposes of flexibility extensibility is more desirable than refactorability (to me and my problem space anyway).

>because we are talking about your comment on Haskell.

I am not in the habit of commenting about things in a vacuum. I am comparing Haskell to other tools in my toolbox.

I am sure that in the Haskell universe (isolated from the rest of reality) Haskell is the greatest thing ever.


What do you mean by "extensible" and which language in your opinion solves this problem adequately?


I mean adding setters/getters to existing data types.

It's solved in any language in which mutable data structures don't require a Rube Goldberg machine like lenses.

e.g languages which optimise for expressiveness over purism.

Languages which give me the expressive power to disregard the 'law' of non-contradiction if I happen to think it's a good idea to do so for whatever reason https://repl.it/repls/RoundRelevantExternalcommand


Oh. Ruby.

In any case, adding getters/setters doesn't imply extensibility and doesn't apply to FP languages (or rather: in FP languages you can add more functions). Getters/setters are often an anti-pattern anyway. Maybe you have a better example? In any case it's very easy to "extend" types in Haskell, it just looks different to what you'd do in Ruby.

As for expressive power, are you familiar with the "expression problem"? [0] There's almost always a trade-off your language is making.

I don't understand what you mean by law of non-contradiction in this context.

[0] the expression problem: https://en.m.wikipedia.org/wiki/Expression_problem


Getters/setters may be seen as anti-patterns in the functional paradigm. They are seen as a pattern in the imperative paradigm.

Of course I am making trade-offs, but so are you. It's just another word for making choices.

What I mean is literally being able to evaluate P ∧ ¬P as True, as an empirical test for a language's expressivity.

You can't do that in Haskell - it won't let you. The language is in control, not you.

In my problem-space not being in control is an anti-pattern.


I meant getters/setters are an antipattern in OOP languages. In FP there are no getters or setters, so the question is meaningless.

Ruby is not more expressive than Haskell. In fact, I'm pretty sure it's less expressive, because it cannot say everything Haskell can, but Haskell can say everything Ruby can.

You say you understand everything is a trade-off but then you make claims about lack of expressivity ;) Do you know the expression problem?

Still don't understand your non-contradiction example.


>Still don't understand your non-contradiction example.

>Haskell can say everything Ruby can

Go ahead and say P ∧ ¬P ⇔ True in Haskell.

Here is me saying it in Ruby: https://repl.it/repls/AccurateJauntyDimension


Neat. How did you do that? (Seems like something a language ought to forbid, anyway).

edit: ok, from what I've deciphered from your Ruby code, it's nothing special. You've declared a global boolean, and every invocation of the function "p" flips the value, so it's only a parlor trick and has nothing to do with the expressivity of the language. Any language with side effects and mutable globals can achieve this trick in the same manner. It's a trick because your first "p" and your second "p" aren't the same "p", so there's no real violation of the "law of non-contradiction".
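
Indeed, the same trick is a few lines in Haskell once the mutation is made explicit (a sketch using IORef; the only difference is that the statefulness shows up in the type):

```haskell
import Data.IORef

main :: IO ()
main = do
  -- "p" reads a mutable ref and flips it, so two calls disagree
  ref <- newIORef True
  let p = do v <- readIORef ref
             writeIORef ref (not v)
             pure v
  a <- p
  b <- p
  print (a && not b)  -- True: "P ∧ ¬P"
```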

What about the rest of my comment? The expression problem? That getters/setters are an anti-pattern in OOP languages (just ask Alan Kay...)?


> Seems like something a language ought to forbid, anyway

> What about the rest of my comment? The expression problem?

Navigating around the expression problem is precisely what I am demonstrating. A language ought not [1] decide for me what I am allowed to express or not. I have a healthy disregard for linguistic purism [2].

Expressive power is the ability to say whatever I want to, whenever I want to. That is precisely what a paraconsistent logic buys me - expressivity [3].

This is counter-intuitive to most people who have been taught to value consistency above all else (mathematicians, logicians). Non-contradiction is an axiom, not a law - it's a false authority. A man-made deity.

> it's nothing special

I didn't claim it's anything special - I merely claimed that I can say it. I did say it.

If "Haskell can say everything Ruby can" then go ahead and say it in Haskell. I am not saying it's impossible - but I am curious how you might navigate the Turing tarpit [4].

>It's a trick because your first "p" and your second "p" aren't the same "p"

First p and second p? You mean p@time(1) and p@time(2)?

Are 'you' not the same 'you' as the 'you' from 1 second ago, despite the ongoing changes in your body?

Ironically, that is the disconnect between pure functions and reality. Side-effects are the norm, not the exception.

>That getters/setters are an anti-pattern in OOP languages (just ask Alan Kay...)

That's an appeal to authority. Alan Kay doesn't get to make decisions for me. He gets to voice his opinion - I get to evaluate the pros and cons. The choice is always mine.

Like I said - In my problem-space lack of choice/control is an anti-pattern. Humans get the final say - not algorithms.

[1] https://en.wikipedia.org/wiki/Is%E2%80%93ought_problem

[2] https://en.wikipedia.org/wiki/Linguistic_prescription

[3] https://en.wikipedia.org/wiki/Paraconsistent_logic#Paraconsi...

[4] https://en.wikipedia.org/wiki/Turing_tarpit


You're being disingenuous. You are not really breaking the law of non-contradiction. At first I thought you at least had redefined the "not" or "and" operators, but you merely wrote a function which alternates between True and False with each invocation, using global state. You've written "true ^ !false == true", which of course violates no law. Hence: trickery. You're not expressing anything novel.

This can be done trivially in Haskell making the global param explicit, and I'm sure with some trickery like the State monad you can even hide this. Maybe the syntax will be less neat, but this is a good thing: "hiding" things like side effects is bad.
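For what it's worth, here is one concrete way to reproduce the trick in Haskell, using a mutable `IORef` rather than the `State` monad (a sketch; the name `p` just mirrors the Ruby version):

```haskell
import Data.IORef

main :: IO ()
main = do
  ref <- newIORef True
  -- "p" reads the flag and flips it on every call, like the Ruby getter
  let p = readIORef ref <* modifyIORef ref not
  a <- p
  b <- p
  print (a && not b)  -- True: the two calls to "p" saw different values
```

The side effect is visible in the type (`p :: IO Bool`), which is arguably the whole point: Haskell can say this too, it just refuses to hide it.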

The expression problem relates to your assertion about adding getters/setters, and judging by your reply it seems you didn't understand this...

So you're dismissing Alan Kay with no good reason. Furthermore, getters/setters have nothing to do with either extensibility or expressiveness. Why should I pay attention to your opinion?


And what about starting out with something like:

    data ParaBool = False | True
        -- | Both - if needed for three-valued logic

Then implement your operators. Then everything is what it is defined to be. How can this be worse than sneakily changing the semantics of already-defined things?

In a free world it should be legal to sell copper as gold, call the lies truth, and mix newborns in hospital - and people are just stupid if they want laws against these... ;)


Strawman. You are conflating denotational semantics [1] with operational semantics [2].

In the free world, a pilot is free to recognize that the 'already-defined autopilot' is doing something stupid/dangerous (despite all the green lights) and is able to take control of the system at run-time - he doesn't have the luxury of fixing this bug at compile-time.

In a free world people are allowed to "sneakily change their minds" when they see an obvious denotational error.

Beware of bugs in the above code; I have only proved it correct, not tried it. --Donald Knuth

Type safety is not the same thing as system safety. The latter cannot be formalized - that is why humans are in charge. Not algorithms.

Making laws against airplanes crashing is not the same thing as stopping airplanes from crashing.

Here is a relevant extract from this paper[3]:

"All ambiguity is resolved by actions of practitioners at the sharp end of the system. After an accident, practitioner actions may be regarded as ‘errors’ or ‘violations’ but these evaluations are heavily biased by hindsight and ignore the other driving forces, especially production pressure. "

[1] https://en.m.wikipedia.org/wiki/Denotational_semantics

[2] https://en.wikipedia.org/wiki/Operational_semantics

[3] https://web.mit.edu/2.75/resources/random/How%20Complex%20Sy...


> Strawman. You are conflating denotational semantics with operational semantics

But your trick is neither. The law of non-contradiction refers to the same proposition P in both positions, simultaneously:

> "To express the fact that the law is tenseless and to avoid equivocation, sometimes the law is amended to say "contradictory propositions cannot both be true 'at the same time and in the same sense'" [0]

Yours isn't "at the same time" because each evaluation of P (which in your case is a programming function with side effects, neither a proposition nor a mathematical function) depends on the other evaluation to cause a side effect.

So you're cheating, you haven't expressed anything innovative or outside the realm of other languages (including Haskell), and you haven't broken the law of non-contradiction.

[0] https://en.m.wikipedia.org/wiki/Law_of_noncontradiction


>The law of non-contradiction refers to the same proposition P in both positions, simultaneously

You are describing quantum superposition, but then you are straddling the classical and quantum paradigms when interpreting the LNC.

How many instructions/CPU cycles does it take to evaluate p ∧ ¬p? More than 1? Then what do you even mean by "simultaneously"?

>which in your case is a programming function with side effects, neither a proposition nor a mathematical function

You are making my argument for me. The LNC is an axiom of Mathematics/Logic. It's not a law as in a law of physics. And so if a physical system happens to violate it - it's hardly a big deal.

The real world actually contains things which are in two (or more) states simultaneously. We call them qubits and we use them to represent uncertainty.

What I have shown is a mutating getter. It's not supposed to be novel or original (I don't know why you are measuring me up to such ludicrous ideals). I am simply demonstrating to you a real-world scenario in which interacting with a system alters its state (deterministically or otherwise) which allows me to do the "impossible": evaluate p ∧ ¬p as True. It's just a race condition.

Side-effects are the norm, not the exception throughout the universe in general. Your classical computer would have no CPU registers, no cache, no memory, no persistence without side effects.

If the Mathematical religion frowns upon side-effects, why should I care about it when it clearly doesn't correspond to the universe I live in?

And if you are upset about my "trickery" (oh no! Physics is cheating!) here is an implementation that doesn't mutate global state.

https://repl.it/repls/CompassionateImmenseConditional


> What I have shown is a mutating getter. It's not supposed to be novel or original

A mutating getter doesn't violate the laws of non-contradiction, because a getter is not a proposition. In fact, if you use a getter at all, you're outside non-contradiction. The rest of your post is ridiculous, but feel free to keep equivocating this with quantum physics or qubits or whatever, instead of acknowledging that rather than breaking the law of non-contradiction, your real assertion is "my language can mutate global variables behind the scenes". Much less impressive, right?

Also, you keep avoiding mentioning what your "problem space" is. I suspect this is because your "problem space" is rather pedestrian and doesn't require anything you claim it does.

Not upset, by the way. That's chicanery on your part. Usually done by people who know they are losing the argument ;)


Challenge accepted, by the way:

https://repl.it/repls/UnluckyBitesizedProcedures

(Trickery? No more than in your case)


Nice. I stand corrected.


>Architecture is extremely sensitive to the initial choices due to isolation of side effects.

Only if you choose to isolate all side effects from each other. Actually Haskell makes refactoring really easy, so you can adapt your implementation to changing requirements easily.


Why would I need to refactor anything in this situation in the first place?


In what situation? If you predict everything correctly, you don't have to, but my point was that you don't have to have everything designed up front, since refactoring is much easier and safer than in other languages.

I wanted to make the point that #4 is not true.


Is that 1% not true or 99% not true?

Amortizing the up-front cost of design into a continuous cost of refactoring every time the requirements change doesn't make the original claim false.

You are still paying the price of rigidity.


Cute how none of this relates to the article. Why do you have a list of straw-man bullet points to rattle off whenever someone says "Haskell"? Who hurt you?


Those don't seem like "straw-man bullet points".


Fine, I'll bite...

(1) "Haskell is unreadable because the business logic gets buried under layers of technobabble." Not observed in practice, and not a property of the language anyway.

(2) List of many things that are supposedly about control flow. They aren't. Also implies that these things are incompatible with each other, which isn't true.

(3) Indeed, some developers try to reproduce research papers out of curiosity some of the time. That's a good thing.

(4) Mostly true, but true in any language, and not due to isolation of side effects.

These are textbook examples of straw manning: representing a caricature of the other side in order to easily argue against it. (But why? Who hurt the poor guy?)


WTF is your oncall person doing reading code at 4am? What state of mind do you expect this person to have at that time? What are they expected to do?

That's a huge red flag.

As for the rest.. I don't think the language was the problem at that team.


What are they meant to do if the world starts burning at 4am? Wait until a reasonable hour? Oncall fires at 4am suck but they're part of running critical services.


You have a runbook. Anything not covered by the runbook escalates to someone who already understands the code.


Obviously, I was talking about the situation when it escalated to that person. And if you are going to claim that you know exactly how your code works after a month, then I call bs on that. Not to mention that 1) people leave companies sometimes 2) 3rd party code can break too.


Not obvious.

Look, you've already fucked yourself by needing a 4am custom fix. That isn't a language problem.


It's not a language problem that you need a 4am custom fix.

It can be a language problem if the person who needs to do a 4am custom fix can't understand what the code is supposed to do.

Or to put it another way: I'd much rather be debugging some BASIC code at 4am than some Brainfuck code.


Only a person from the team who wrote the code to begin with should create an emergency fix. Since any developer from that team already knows Haskell, from having worked on the code base, the criticism is ridiculous.


We're all on the same team at work (there's only the six of us), but that doesn't mean I know all the details of my coworkers' work. It's also not uncommon that the dev available at 4am is not the one who wrote the code himself.

As I pointed out in my previous post, I do feel the language can be a non-trivial factor in allowing a dev like myself to be confident in developing a fix for a coworker's code at 4am.

However I don't know Haskell, so I have no idea how it fares in this regard.


I can understand not knowing the details of your coworkers' modules. This has been true everywhere I worked. However.....

Where I work now, on-call was long ago relegated to nothing more than a routing task. The on-call person takes the initial call, figures out who best can solve it, and contacts the person. The guy on my team who writes only in Javascript and Python would NEVER be in the position to make a critical fix overnight in a Java module. For those of us who work in Java, reading the language is trivial but understanding what the module is supposed to do from a high level is non-trivial. People who work in Haskell daily would be the same.

This is what I tried to call out. A Haskell developer called at 4am would have no more problem reading Haskell code than I would have reading Java code. While a Javascript developer should NOT try to issue a critical patch in a language (s)he doesn't work in, regardless if the language is Haskell, Python, etc. Any organization requiring you to do so is poorly managed.


> This is what I tried to call out. A Haskell developer called at 4am would have no more problem reading Haskell code than I would have reading Java code.

Do you agree there's a spectrum here (with stuff like Brainfuck being on one end)? If so, I think it's a valid question to ask about a language.

Maybe Haskell is just as easy as Java, maybe it is slightly more difficult, maybe it is slightly easier.

For example, you can't really do embedded DSL's in Java, but you can in Haskell. If my coworker had used that and I had not yet had time to study it, could it be I would struggle a bit more with grokking his code? I dunno, at least to me it seems possible, but since I've not really used Haskell it might be entirely different.


"Do you agree there's a spectrum here (with stuff like Brainfuck being on one end)?"

I'm in complete agreement with you. My contention is that a person who works with a certain language, even Brainfuck, all day every day is going to be able to read it as easily as a person who works in a more "popular" language like Java: even if it's at 4am. Anyone who doesn't work in the language shouldn't be reading it with the goal of implementing a critical fix at 4am. That person should contact a team member who does work in the language.

I'm contending that the 4am scenario, as posed by the parent commenter, is a process issue not a language issue. If a manager requires that whoever answers the phone at 4am fixes the problem, even if the person has no knowledge of the language or requirements of the module, then his/her employees should be running to another job because that manager is bad for your health and livelihood.


Well yeah reading Brainfuck is rather easy after all, it's just a few symbols. Understanding what the code does and what it's trying to do is something else, and there I think language matters a whole lot.

For example, const helps me a lot when trying to understand C++ code. If I see a const reference, I know this bit of code I have in front of me can't modify whatever it is referencing. My language at work doesn't have const references, so I'm never quite sure if a call does modification as a side effect or not. This is the kind of stuff that I think can matter when trying to fix something at 4am.

Now, Haskell as I understand it is rather pure, so side effects like that are not a big issue. But on the other hand it allows for embedded DSL's from what I understand, which perhaps could be an issue if my coworker were to get creative. Though as I said I have no background to judge Haskell on this, I just thought it was a valid thing to be concerned about.


I think the same person who writes unreadable code in Haskell would write unreadable code in most any other adult language.


You write bug-free code? You verify correctness of distributed and concurrent code on type level? What approach do you use: session types, TLA, separation logic? Sounds super cool, that's some seriously bleeding edge stuff! Does your team have open positions?


No. You don't put yourself in that position that you need a fix at 4am. Roll back to a prior version, or shut the service down, or go into a reduced-functionality mode (e.g., readonly) as you can.

Give yourself time the next day to actually think about the problem with a clear head. At 4am trying to debug code, you're far more likely to cause more problems than fix things.

To me, I read this question the same as "Haskell's not useful when you're stuck in a well with a bomb ticking down." The language isn't the problem. It's the situation that's the problem.


Ok, I won't argue with that. If you don't work with mission critical systems -- sure, #1 doesn't apply to you. Play with any language you like.


Mission-critical systems use SRE principles so no one is debugging code at 4am. If you can't solve a production issue without looking at the source code, you've long since failed at building a mission-critical system.


Even if your app is 100% bug free and fault-tolerant, sometimes an external system feeds it invalid data and you need to figure out how to contain and revert the damage. That was my last on-call involving reading a lot of code under time pressure (thankfully, not alone and not at 4 am).

All commenters who said "It's your own fault for writing crappy code, duh!" perhaps never worked with anything sufficiently complex and expensive.


I can't upvote your comment enough. The use-case for a developer implementing a fix after being pulled out of sleep should be life-and-death circumstances.


Are you saying you've never written code and then later came back to it, saying "Who wrote this shit?" and did a git blame, only to find out it was you?


If you can't read your own code, then the language isn't your problem.


Seems like someone never solved a complex problem.


Full on ad-hominem, eh?


and in comparison your comment wasn't?


Who was I attacking? Writing unreadable code isn't a language problem. Even in doc-poor environments, the names and sparse comments should be enough. If not, the language can't help you choose good names.


Good luck remembering complex business logic in your perfectly readable code and all external dependencies (DBs, other microservices, etc.) a few months after you last touched it, and in the middle of the night.

And after a few refactorings.

Most errors have nothing to do with “unreadable code”.


In the DevOps world, the person who understands the code is the person who got paged for it being broken.


How does that work in practice? In order to page, you should find out if it is the code, and it is indeed broken. I'm guessing all tests passed, so that won't be of much help either...

In reality, it would be a sysadmin confirming dependencies for this application to run (in the system, network, storage, ...) are functioning as required. If there turns out to be a problem with the application itself, there's no time for development at that point: you roll back. I don't see how the paging developers thing would be able to provide any stability. It's too late for that when you're in production.


In my reality I am the "sysadmin". I am also the "developer". I am the guy who wrote tests (unit and integration). I also configured the CI/CD pipeline.

I am also the guy who puts metrics in place to monitor the health of the system and if any metrics breach alarming thresholds the owner of the service (me!) is automatically paged.

How does a sysadmin know that something is broken anyway, and how does a sysadmin know who needs to get paged? If a sysadmin can make this decision - so can an algorithm.

Much of this is in Google's SRE handbook[1]. The entire notion behind the DevOps concept was to make sure that you don't segregate development and operations.

The people who write crappy code must be the people who wake up at 3am when the crappy code breaks.

>It's too late for that when you're in production.

Bugs will always slip through testing, and things will break even in production. Nobody is perfect - not even Google.

So what do you do when rollback doesn't work, you've tried everything in the playbook but the system/service is still down? Whose job is it to understand how to recover your service? Surely you don't expect the sysadmins to be doing that? They don't understand the system. How could they? They didn't build it.

[1] https://landing.google.com/sre/books/


Just yesterday we got a support call at 2am which quickly escalated to one of my dev coworkers. He had to dig through code to find the problem and push a fix at 3am.

The problem had to be fixed right then as further delays would cause heavy disruptions of the operation for that day, which would have been very bad for our customer.

While support can deal with most issues, it's not entirely uncommon that us devs have to step up and either help fix the issue by analyzing the relevant code, or actually push a fix in the middle of the night.


I've worked at multiple Haskell shops. They've all had problems. None of the problems were actually attributable to Haskell..but Haskell was an easy target for blame by management. I'd say that's Haskell's biggest problem in a professional setting.

I know Haskell and Go about equally. Meaning I know all the language features & common libs & have a grip on their runtimes. Go on day 1 was way easier to write and understand than Haskell on day 1. Now that I've normalized their learning curves, Go didn't get much easier to work with. Haskell did - I use my brain way less when writing Haskell than when writing Go.

But even then, Haskell or Go. It's all just the same stuff.

I've pretty much given up talking on the Internet to people about Haskell, arguing against some of the points in this thread about how Haskell isn't good for production.

I'll continue to write Haskell for pay for the foreseeable future. If I'm lucky, I'll do it the rest of my career. I don't see any reason why not.


> ..but Haskell was an easy target for blame by management. I'd say that's Haskell's biggest problem in a professional setting.

I would say that's a software company's biggest problem instead. I really don't understand how someone with little or no coding experience can be in a leadership position with other programmers who are supposed to be problem solvers and have all the details of how to build something. The bigger the gap in knowledge, the more communication breaks down and the hierarchical structure becomes useless.


> someone with little or no coding experience can be in a leadership position

I had one where the leader in question had coding experience but didn't know Haskell at all.

Despite this [1], they tried to read some code (that the Haskellers had no issues with) and couldn't, so they deemed the codebase unreadable and eventually called for a rewrite. Worse, during the debate about the rewrite, there was constant discussion about how the code was unreadable and bad, but it was never sourced to this leader. Instead it was asserted with weasel words only.

[1] Maybe it was because of this..this leader was driven by confidence in their experience and seniority.


This comment pretty much sums up my own experience, including learning Go, then Haskell, and the final takeaway - have pretty much given up talking on the internet about Haskell.


Python may be terrible, but this poster doesn't know Python at all. Here's how to fix the first snippet:

    leaf = object()
Huh, that's funny, it's shorter than the Haskell? Why is that? Let's keep going.

    def merge(lhs, rhs):
        if lhs is leaf: return rhs
        if rhs is leaf: return lhs
        if lhs[0] <= rhs[0]: return lhs[0], merge(rhs, lhs[2]), lhs[1]
        return rhs[0], merge(lhs, rhs[2]), rhs[1]
Ugh. My mouth tastes funny. Livecoding on this site is always disorienting. I need to sit down for a bit. Exercise for the reader: Continue on in this style and figure out whether the Haskell really deserves its reputation for terseness and directness.

Edit: I kept reading and was immediately sick. __dict__ abuse is a real problem in our society, folks. It's not okay.

    def insert(tree, elt): return merge(tree, (elt, leaf, leaf))
The Haskell memes are growing stronger as I delve deeper into this jungle. The pop-minimum function here is a true abomination, breaking all SOLID principles at once. I can only imagine what it might look like in a less eldritch setting:

    def popMin(tree):
        if tree is leaf: raise IndexError("popMin from leaf")
        return tree[0], merge(tree[1], tree[2])
We continue to clean up the monstrous camp.

    def listToHeap(elts):
        rv = leaf
        for elt in elts:
            rv = insert(rv, elt)
        return rv
The monster...they knew! They could have done something better and chose not to. They left notes suggesting an alternative implementation:

    from functools import reduce
    def listToHeap(elts): return reduce(insert, elts, leaf)
Similarly, if we look before we leap:

    def heapToList(tree):
        rv = []
        while tree is not leaf:
            datum, tree = popMin(tree)
            rv.append(datum)
        return rv
And again, the monster left plans, using one of the forbidden tools. We will shun the forbidden tools even here and now. We will instead remind folks that Hypothesis [0] is a thing.

Haskell's an alright language. Python's an alright language. They're about the same age. If one is going to write good Haskell, one might as well write good Python, too.

[0] https://hypothesis.works/


I really wasn't trying to compare Python to Haskell, rather I was trying to show a few example features in Haskell with the Python code as a reference for the "standard" way to do a binary tree type thing. Other than the (admittedly awful) `__dict__` stuff, the rest of it is pretty standard. In contrast, the code you've written here is non-mutating, and uses tuples to represent a tree. If you were to google, say, "BST in Python" I'd wager almost none of the implementations would follow that style. If I was to write a skew heap in Python (that I intended to use), I would likely do it in a non-mutating way (although I certainly wouldn't use tuples and `leaf = object()`).

The point of the post was really to argue that simple features like pattern matching, ADTs, and so on, should be in languages like Python and Go. Also I wanted to make the point that functional non-mutating APIs could be simple and tend to compose well: the `unfoldr` example was all about that. In that vein, it was important that I compare the Haskell code to an imperative version.

For instance, with your `reduce` improvement: I agree that the `reduce` version is better! It's simpler, cleaner, and easier to read. But Python these days is moving away from that sort of thing: `reduce` has been removed from the top-level available functions, and you're discouraged from using it as much as possible. The point I was making is that I think that move is a bad one.

Finally, while the Python code here is shorter, you still don't get any of the benefits of pattern-matching and ADTs.

* You can only deal with 2 cases cleanly (what if you wanted a separate case for the singleton tree?).

* You are not prevented from accessing unavailable fields.

* You don't get any exhaustiveness checking.
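For comparison, here's a minimal Haskell sketch of those three points (not from the article, just an illustration):

```haskell
data Tree a = Leaf | Node a (Tree a) (Tree a)

describe :: Tree a -> String
describe Leaf               = "empty"
describe (Node _ Leaf Leaf) = "singleton"  -- extra cases are cheap to add
describe (Node _ _ _)       = "larger tree"
-- There is no way to ask a Leaf for a Node's fields (that's a type error),
-- and removing any case above makes GHC warn about incomplete patterns
-- under -Wincomplete-patterns.
```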


Python has some basic pattern matching. ADTs are alright, but if you notice that MLs implement them by tagged unions, then really this is a request for syntax and ergonomics, not semantics.

Python is untyped. This fundamental separation between Python and Haskell is non-trivial, and can't be papered over. Your complaints about exhaustiveness, field existence, and case analysis are all ultimately about the fact that Python's type system is open for modification, while Haskell's is closed; in Haskell, we can put our foot down and insist that whatever we see is an instance of something that we've heard of, but in Python, this is simply not possible.

I agree, when it comes to Python's moves. I am about ready to leave Python 2, but I'm not going to Python 3.


While I am all for stronger type systems, I don't agree that you need it to do sum types. We can already do one half of ADTs (classes ~= product types), I just want the other half!

In my mind, the syntax would be something like this:

    sum_class Tree:
        case Leaf:
            pass
        case Node:
            data: Any
            left: Tree
            right: Tree

    def size(tree):
        case(Tree) tree of:
            Leaf:
                return 0
            Node(_, left, right):
                return 1 + size(left) + size(right)
A combination of data classes and pattern matching.
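For what it's worth, you can get part of the way there in today's Python with frozen dataclasses and a `Union` (a sketch with made-up names; crucially, there's no exhaustiveness checking):

```python
from dataclasses import dataclass
from typing import Any, Union

@dataclass(frozen=True)
class Leaf:
    pass

@dataclass(frozen=True)
class Node:
    data: Any
    left: "Tree"
    right: "Tree"

# A "sum type" emulated as a union of the two case classes.
Tree = Union[Leaf, Node]

def size(tree: Tree) -> int:
    # isinstance dispatch stands in for pattern matching;
    # nothing warns us if we forget a case.
    if isinstance(tree, Leaf):
        return 0
    return 1 + size(tree.left) + size(tree.right)
```

That covers the "X or Y" shape, but nothing complains if `size` forgets the `Leaf` case, which is exactly the missing feature.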


So `tree` is either a `tuple` or an `object`? Seems pretty terrible in terms of typing.


The original was not better. More importantly, recall that Haskell's type system is unsound, so it's not like you can trust your Haskell code either. Just because it type-checks does not mean that it works. In either language, you'll want to write some tests. I mentioned Hypothesis for Python; for Haskell, there's also QuickCheck.


Haskell's type system is unsound? I don't know about this and if it's true then it is some rarely used case. Don't throw away types altogether because of some rarely used case... Types are very very useful.


Pick literally any type. The value `undefined` proves it, even if it shouldn't be provable. Magic~
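Concretely, the claim is that this type-checks (a small illustration):

```haskell
-- `undefined` inhabits every type, so the checker happily accepts this...
absurd :: Int
absurd = undefined
-- ...and the program only fails when `absurd` is actually evaluated,
-- throwing a runtime exception.
```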


I agree with your criticisms, the Python version presented in the article is rather baroque. In particular, the use of __dict__ seems to have little justification and serves only to "uglify" the Python.

Here's my version of a skew heap in Python 3, edited down to just the essential operations. (Link to slightly fuller version: https://gist.github.com/olooney/97643d07d69d22015ae5bb70c121...). It seems about as clean as the Haskell version presented, mainly because I used None to represent leaves (which is quite Pythonic). Relative to the code presented in the article, it also benefits from a clear partition of trees of immutable nodes on the one hand, and a class to represent the mutable heap interface on the other.

    from typing import NamedTuple, Any, Optional

    class Node(NamedTuple):
        """A single Node in a binary tree."""
        value: Any
        left: Optional['Node']
        right: Optional['Node']

    def merge(p: Optional[Node], q: Optional[Node]) -> Optional[Node]:
        """
        Implements the critical "merge" operation which is
        used by all operations on a SkewHeap. The merge operation
        does not mutate either tree but returns a new tree which
        contains the least item at the root and is in heap order.
        The resulting tree is not necessarily balanced.
        """
        if p is None: return q
        if q is None: return p

        if q.value < p.value:
            p, q = q, p

        return Node(p.value, merge(p.right, q), p.left)


    class SkewHeap:
        """
        A SkewHeap is a heap data structure which uses an unbalanced binary tree to
        store items. Although no attempt is made to balance the tree, it can be
        shown to have amortized O(log n) time complexity for all operations under
        the assumption that the items inserted are in random order.
        """   
        def __init__(self, items=tuple()):
            """
            SkewHeap() -> new, empty, skew heap.
            SkewHeap(iterable) -> new skew heap initialized from the iterable.
            """
            self.root = None
            for item in items:
                self.push(item)

        def push(self, value: Any):
            """Add an item to this heap."""
            node = Node(value, None, None)
            self.root = merge(self.root, node)

        def pop(self):
            """Remove the least item in this heap and return it."""
            if self.root is None:
                raise ValueError("Cannot pop empty SkewHeap")
            else:
                value = self.root.value
                self.root = merge(self.root.left, self.root.right)
                return value

        def union(self, other: 'SkewHeap') -> 'SkewHeap':
            """Return a new heap which contains all the items of this and another heap combined."""
            ret = SkewHeap()
            ret.root = merge(self.root, other.root)
            return ret

        def __bool__(self) -> bool:
            """Return true iff the heap is non-empty."""
            return self.root is not None

    def test():
        h1 = SkewHeap()
        for item in [42, 13, 50, 11, 14, 50, 91, 72, 91]:
            h1.push(item)
        h2 = SkewHeap([63, 15, 1, 22, 91, 11, 92, 99, 93])
        h = h1.union(h2)

        while h:
            print(h.pop())
Maybe someday I'll read an article comparing two languages where the author's greater familiarity with one language over the other isn't the dominating factor, but it won't be this day.


Your recursive merge function will blow the stack for large inputs and it will be slow.
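For illustration, here is one way the recursion could be unrolled (a sketch of mine, not from the thread; `merge_iter` is a hypothetical name, and the `Node` type is redefined so the snippet is self-contained):

```python
from typing import NamedTuple, Any, Optional

class Node(NamedTuple):
    value: Any
    left: Optional['Node']
    right: Optional['Node']

def merge_iter(p: Optional[Node], q: Optional[Node]) -> Optional[Node]:
    # Phase 1: walk down the right spines, always taking the smaller
    # root first, recording the nodes along the merge path.
    path = []
    while p is not None and q is not None:
        if q.value < p.value:
            p, q = q, p
        path.append(p)
        p = p.right
    merged = p if p is not None else q
    # Phase 2: rebuild bottom-up; as in the recursive version, each new
    # node's left child is the merged subtree and its right child is
    # the old left child (the "skew" swap).
    for node in reversed(path):
        merged = Node(node.value, merged, node.left)
    return merged
```

The allocations are the same as in the recursive version; only the call depth, which is proportional to the merge path, goes away.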


I got curious about Haskell some years ago after a career's worth of disappointment with other languages. I've put out some semi-big apps in this language that have been working well in production. I code for fun, and Haskell is the most fun. Sure, there have been a couple of problems with tooling, and it's hard to find best practices, but I seem to be able to live with that. I get a little depressed when having to work with other languages. For me it's the end game.


I think python was a poor comparison here. The claim "python doesn't have algebraic data types" doesn't really make sense. Algebraic data types let something be an X or a Y. In python, everything can be an X, or a Y, or a Z, or anything else. In the tree example, an idiomatic tree structure in python would just be a 3-tuple (left, right, value). There's no need to define anything.

Don't get me wrong, I'm all for algebraic data types and think every statically typed language should have them, but it's nonsensical to talk about them in the context of a dynamically typed language.

The author commented elsewhere on this page that the article was mostly a response to missing these features in golang - I think that would've been a much clearer comparison that showed what you were really missing.
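To make the tuple encoding concrete, here is what it looks like in practice (my sketch, not from the comment; `tree_sum` is a made-up example function):

```python
# The "idiomatic" dynamic encoding suggested above: a tree is either
# None (empty) or a 3-tuple (left, right, value). No type definition
# is needed, but nothing stops you from building a malformed tree.
def tree_sum(tree):
    if tree is None:
        return 0
    left, right, value = tree
    return value + tree_sum(left) + tree_sum(right)

t = ((None, None, 1), (None, None, 2), 3)
# tree_sum(t) evaluates to 6
```

The unpacking `left, right, value = tree` plays the role of pattern matching on the Node constructor, and the `is None` check plays the role of matching on Leaf.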


One thing that makes programming difficult is how much context needs to be held in the live part of the brain. Past N items (let's say four), the brain has to swap those values out, and that makes programming much, much slower.

Most of the article is spent explaining this sideways: with Haskell you need to hold fewer aspects in your head, because they are either eliminated entirely (purity) or can be deferred to the compiler (types). What we learn is that tree-like structures are easier to implement in Haskell.

I think this is what most language proselytes are trying to convey in their articles, but they don't talk about it explicitly. Inevitably they will pick a task that is easy to achieve in the language because the developer environment aligns well for that use-case, and then let the reader infer that this applies to all programming tasks.

Basically we are trying to benchmark humans, without building a proper model of how humans work. The next evolution in programming will be done by properly understanding how we work and interact with the computer.


My Haskell isn't good enough to translate, but I'd love to see these examples in Rust. I believe Rust has all of the Haskell features mentioned in this article, but with a much more familiar syntax.

The tree data type in Rust could be:

    enum Tree<A> {
        Leaf(A),
        Node(A, Box<Tree<A>>, Box<Tree<A>>),
    }


Why should

    enum Tree<A> {
        Leaf(A),
        Node(A, Box<Tree<A>>, Box<Tree<A>>),
    }
be more "familiar" syntax than

    data Tree a
      = Leaf
      | Node a (Tree a) (Tree a)

?


I know what an enum is, but "data" for me is just 1s and 0s.

Are we assigning here with the =? What does the pipe symbol mean? Why pipe instead of another =? Why the weird formatting?

To me Haskell's look is too off-putting. With the Rust example I have a good guess at what the resulting object will look like.

But I know it's just a learning experience. Once I know Haskell, your example will probably look more elegant to me. I just don't get it from looking at it :)


Your questions are probably mostly rhetorical, but I'll give a brief answer to them.

> Are we assigning here with the =?

Kind of: you are assigning what the type `Tree a` is.

> What does the pipe symbol mean?

The pipe is symbolizing or/either here. A tree is either a Leaf or it is a Node with two subtrees.

> Why pipe instead of another =?

You are building up a single type with the pipes, having an extra = wouldn't really make sense when you are thinking about building a type algebraically.

> Why the weird formatting?

The formatting is optional. It is free to be all in one line if you want it that way. For the given example, I would probably make it a single line, but I'm not a Haskell veteran.


The formatting is no weirder than Python's.

You could ask many of the same questions of Python's non-C/non-Java like syntax:

- What is "def"?

- Why the weird formatting? (And unlike Haskell, Python's tends to be stricter!)

- Why do I need to write ":" after some lines but not others? It doesn't work like the semicolon in C-like languages!

- What's with the "if __name__ == '__main__'" weirdness I see in some Python programs?

- What's this weird [f(x) for x in ...] syntax? It doesn't look like anything in C. What's with the brackets anyway?

Etc.

Yet Python with its "weird" syntax and constructs is a hugely popular language...



Python is more popular than Haskell, which is why it's easier to google.

Do note Haskell tutorials and communities abound, and you have excellent online tools such as Hoogle (in which you write the type of what you think you want and it responds with "these are functions with a similar type signature, with their documentation"). It's easy to google Haskell things, just not as easy as googling Python things :)

Do note the type definitions from the example are Haskell 101 and will be covered very early in almost every tutorial, for example Learn you a Haskell.

PS: it's not a "pipe operator" you're looking at. This isn't an operator at all! The "|" appears in a type definition, and it means a union of alternatives (this type can be "this" or "that" or "this other thing"). If you think about it, this "union-or" is written the same as the bitwise-or from more popular languages :)


Hence Eich’s famous « I was under marketing orders to make it look like Java » :)


Well, the marketing folks were right after all :)


Haskell: A Tree = "IS" a Leaf | "OR" a Node

In the Rust syntax, ',' means both OR and AND:

A Tree { "IS" a Leaf , "OR" a Node }

A Node ( "IS" an A , "AND" a Box , "AND" a Box )


Because everyone knows what an enum is, and that <A> will be a generic type, from first glance. There's nothing to guess, apart from Box being some kind of pointer abstraction.

Looking at the second definition it's not immediately apparent what 'a' is and "Node a (Tree a) (Tree a)" seems just like a bunch of words concatenated by spaces, it has no apparent structure or meaning, unless you're used to writing Haskell/ML/Lisp/etc.


> Because everyone knows what an enum is, and that <A> will be a generic type, from first glance

That's false. A programmer coming from Python or Go won't know this. Nobody who hasn't been exposed to the extremely arbitrary generics syntax in Java-like languages will know about <A>.


One needs to know Rust to understand that Box is Rust's way of doing heap allocation.


There is a huge unreadability right there staring at me: what does that Box<> do, and why? Rust is borrowing more and more of the obscurities of C++, and that's not a good thing...


There's nothing unreadable (and certainly nothing obscure conceptually) about 'Box<>'. It's just unfamiliar if you don't know Rust. But we'll get nowhere fast confusing readability with familiarity.

I'd be interested to know if anyone has done interesting conceptual and/or empirical work on readability. It seems like a very slippery and difficult concept to me. Readable to whom? Readable in the small or the large?


Readability and familiarity are not the same but closely related. There is nothing inherently more or less readable in the rust or haskell tree example.

I think there is work on "readability", just in another context: it's called typography and orthography. And I think the gist of it is: do it like everybody else does, first and foremost; strange and unfamiliar equals unreadable.


> I think there is work on "readability", just in another context: Its called typography and orthography.

I think that's a very different case. Maybe some analogies might be drawn between some of that work and some of the lower-level aspects of reading code (related to syntax noise etc), but code readability, if it's a defensible concept at all, is a far more complex and layered phenomenon than letter & word recognition.

The first thing a researcher would need to establish is whether or not readability even exists as a natural kind apart from familiarity. I don't know the field, so this might already have been pursued somewhere.


> There's nothing unreadable [...] it's just unfamiliar if you don't know Rust

Agreed. Note the same applies to Haskell's syntax :)

People confuse "readable" with "based in my knowledge of Java and C, I can't make head or tails of this notation without reading a tutorial first", which in my opinion is not a sensible conclusion.


It's not entirely sensible, but it is understandable. I don't think most programmers are truly aware of how much they know, and how deeply automatic their recognition of programming constructs has become.


Box puts something on the heap and returns a pointer to that thing. Here it is necessary because otherwise the Rust compiler wouldn't be able to determine the size (in memory) of a Tree.


Box<T> is a type, so it’s confusing to say it returns a pointer. It holds a pointer.

GP should note that C++’s unique_ptr isn’t an obscurity.


Calling it Box is confusing. unique_ptr would be an improvement for the name, or maybe HeapRef. The problem is exactly that rust chose a misleading name Box (boxed types are something entirely different in most languages) instead of the obvious C(++)/Java-like _ptr, ref, * or & notation/convention


Boxed types in Java are basically the same thing: a heap allocated version of an otherwise stack-allocated type.


I thought the name was pretty clear; when I saw it in some list of different kinds of Rust pointers, I knew what it was immediately.

It doesn't matter if some people are confused, because you can just explain what it is in 3 seconds. What's important for such a ubiquitous type is that the name is short.


> instead of the obvious C(++)/Java-like _ptr, ref, * or & notation/convention

That would be very misleading since Box represents a heap-allocated owned value


The syntax and terminology (enums, structs, etc.) is more familiar to C++ / C# / Java users.


I always found the use of "enum" for things that are not really enumerable in a useful way to be very confusing. Or are Rust "enum"s enumerable in some subtle way that I don't recognise? Is it just some vestigial term that now has no relation to its original meaning? At least in C, "enum"s are enumerable because they are just integers.


Rust's "enums" look more like tagged unions to me. I guess the tag is enumerable? Although I also don't understand why Rust called tagged unions "enums."


They enumerate a finite set of disjoint cases, so in some sense they are an enumeration. But the real reason is, of course, that sum types can be seen a generalization of C enums, so the syntax was chosen to maximize familiarity.


Cases aren't values, though. a,b,c is an enum type over enumerable data. int, char, float is an enum kind of enumerable types.


They enumerate a possible set of valid values. Hence “enumeration.”

They are tagged unions, but sometimes, the tag doesn’t exist. Or rather, invalid parts of values can be used so that the tag isn’t an extra bit of data, but instead is built into the same space. “Tagged union” gets too deep into only-mostly-accurate implementation details to be a good name.


They seem to enumerate a possible set of valid structures which can hold arbitrary values. I guess it's just so different from C enums I'm having trouble understanding why the name was repurposed. It's probably less different from C++/C#/Java enums (I know at least one/some of those languages have more complicated enums than C).

Sure, tagged union implies a particular implementation that may not always be required, but it's conceptually easy to understand and doesn't have the historical baggage. (Maybe a better description would be strongly typed union? I want to make clear I'm unfamiliar with Rust and just guessing based on the syntax presented.) I think the biggest problem with "tagged union" (or the even longer "strongly typed union") is that it just isn't a good keyword name — it's two (or three) words and fairly long. No one wants to type out 'tagged_union' and from that sense, 'enum' is better. I don't have a better suggestion for you, and IIRC Rust 1.0 has now frozen the language to some extent.

Thanks for trying to explain, I appreciate it.


They can be any data type; just a name, a struct, a tuple struct, or even another enum.

I think also, likewise, “union” sounds strange unless you have C experience. Many of our users do not know C, and so that name doesn’t help them either.

In the end names are hard.

Happy to, you’re very welcome :)


C, or a little set theory. :)


> In the end names are hard.

Indeed! Thanks again.


I would say the second one is more intuitive to me, it reads like I have a Tree type and it can be either a Leaf or a Node.

The rust example to me isn't immediately obvious that it's an either / or situation other than that must be how an enum works (I know enums from other languages)


Literally the punctuation ("<>{}()") and lexical structure is more familiar to anyone who's written any C-family language (C, C++, Java, ...). I say this as a C programmer with more or less equal (near-zero) Rust and Haskell experience.


Because of the memory safety in Rust, it's now clear those left and right child nodes could be missing, since there's no null in Rust.


Given that GP already stated they don't know enough Haskell to translate all the examples, it seems pretty clear to me that by "familiar" they mean "in a language I'm familiar with".


There's no reason why it should be more familiar. But it is more familiar to the huge number of developers who are used to ALGOL/C family languages.


The word “should” can also be “used to indicate what is probable” (according to the OED). I think that's the way GP intended to use it. As in, "why is it probable that the syntax is more familiar?"


    Node(A, Box<Tree<A>>, Box<Tree<A>>),
is obviously nested (a Node contains an A and two Trees) and the trees are optional (they are contained in a Box, whose purpose is obvious without even knowing the language)

    | Node a (Tree a) (Tree a)
might or might not be nested, given the cheerful taste for currying and juxtaposition without punctuation that prevails in Haskell syntax, and it isn't obvious what purpose the parentheses serve (just grouping ?) and whether the trees are optional.


To be obvious that it’s nested, you need to know that < and > are used as approximations for ⟨ and ⟩. You need to know that they’re brackets, not operators.

You need to know that “enum” means that commas in the next section are different than usual, but only one layer deep—check nesting carefully. You need to know about type parameters in either version.

I had a similar experience learning my first ML: I couldn't even tell how many words each word would gobble up, because I didn't know the reserved words yet. Syntax highlighting helped, and it's not a problem after a week or two. It's no worse than figuring out what's a binary vs unary operator in C and its descendants.

Also: it’s not obvious to me that Box means optional/nullable. I’d expect it to mean a required non-null pointer to a heap element.


Children of a non-leaf tree node must be optional because otherwise the node would be forced to have both children. It's therefore obvious that Box means optional; otherwise there would be a bare Tree<A> to represent a mandatory reference.


I don't know Rust (though I'm familiar with Java & C) and there's nothing obvious about Box. In fact, I'm just learning from your comment that it means optional.


(Box does not mean optional; the parent is wrong. Option is the type for optional. Box is basically a malloc, placing a value in the newly allocated memory, plus an automatic call to free when it goes out of scope. The box itself is a pointer to the heap. It's not allowed to be null; in some sense, it's the opposite of optional.)


How is the Haskell example not "nested" according to your definition? It contains an "a" and two Trees, just like the Rust example. Nothing is optional. (There might be a Maybe or similar type, possibly hidden behind a type name, but it would still not be optional.)


> given the cheerful taste for currying

Currying makes no sense in type definitions. It's like saying that in Java you aren't sure if "String name" will run something, "given Java's cheerful taste for running things".

To me, it's obvious the parentheses in "(Tree a)" are grouping things, which is the most immediate (and correct) interpretation, but I'll agree this is more debatable.


They are clearly grouping "Tree a", but do they mean that it is optional?


Why would you assume it's optional? In which popular programming language do parentheses mean "this is optional"? When you read a formula such as

    (x + 1) / 2
do you assume part of it is optional?


> but with a much more familiar syntax.

This is actually one of my main two beefs with Haskell:

     . The language is really interesting from a feature point of view, but the syntax, oh man. Looking at Haskell code when you switch back and forth from a more conventional language (C/C++/JS/Java/C#/Python/etc...) is a major headache. I can switch from C++ to Python without a second thought. Not so with Haskell.

     . You can't really predict how something will actually execute, that's left to the language to decide. While in some situations this is a desirable feature, in many, it isn't at all, especially if you're writing performance critical code.


1. The language is really interesting from a feature point of view, but the syntax, oh man. Looking at Haskell code when you switch back and forth from a more conventional language (C/C++/JS/Java/C#/Python/etc...) is a major headache. I can switch from C++ to Python without a second thought. Not so with Haskell.

2. You can't really predict how something will actually execute, that's left to the language to decide. While in some situations this is a desirable feature, in many, it isn't at all, especially if you're writing performance critical code.


Thanks. I wonder why people don't correct their text as soon as they notice how it renders.

> You can't really predict how something will actually execute, that's left to the language to decide.

If this is referring to order of evaluation, it's better to think that it's up to the code to decide, not the language. In a function application/call like:

  f a b c
it is f, not Haskell, that will decide in what order a, b, and c will evaluate by the way in which it uses them. Haskell, the language, merely doesn't force the evaluation of arguments before evaluating a function.
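A rough Python analogue of that point (my sketch, not from the comment): passing zero-argument thunks makes the callee, not the call site, decide whether and in what order arguments are evaluated.

```python
# Thunks (zero-argument functions) emulate one key property of lazy
# evaluation: the callee controls if and when each argument is forced.
evaluated = []

def thunk(name, value):
    def force():
        evaluated.append(name)
        return value
    return force

def f(a, b, c):
    # f forces c first, then a, and never touches b.
    return c() + a()

result = f(thunk("a", 1), thunk("b", 2), thunk("c", 3))
# result is 4; evaluated is ["c", "a"], and "b" was never evaluated
```

Real call-by-need also memoizes each thunk after the first force, which this sketch omits.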


It ends up looking like a halfway point between the two. A little of column A and a little of column B.


Wait, why do the leaves contain no data? What's the point of even adding them then?

With data in the leaves you could easily do something like:

    from dataclasses import dataclass
    from typing import Any

    @dataclass
    class Node:
        data: Any

    @dataclass
    class Tree(Node):
        left: Node
        right: Node
using the new dataclasses module. You could also add a type parameter to the above if you really wanted to.

The pattern matching will need to be done manually though, following python's philosophy of duck-typing.

Edit: An alternative involves abusing the pattern matching that python does have to write things like:

    data,*subnodes = myTree
    for node in subnodes:
       # etc
but whether that's really a good idea is debatable.


> Wait, why do the leaves contain no data? What's the point of even adding them then?

You need something to represent a completely empty tree. You also need something to represent in a Tree node that "there is no child here". It makes sense to use the same thing for both. In Python and other languages with nullable references you can just use None (or null, or ...) for this. But ML-family languages have no nullable references. You could use option types instead, but that would look pretty complex, something like (my Haskell is rusty):

    type Tree a = Maybe (TreeStructure a)

    data TreeStructure a = TreeNode a (Maybe (TreeStructure a)) (Maybe (TreeStructure a))
(You should be able to use Tree a in the definition of TreeNode, but I wanted to show the "real" structure, which you would also have to care about when pattern matching.)

It's easier to have an empty leaf instead. Personally I would possibly call it EmptyLeaf or maybe NoTree instead of just Leaf.


At this point we're basically discussing what kind of tree you want, including whether you truly need an empty tree (which is maybe necessary if you want to do Monad-like operations that don't return a result, but doing so on a tree is pretty weird in the first place, and I'm not entirely sure what you'd do if a node in the middle of the tree returned Nothing: do you just throw away its children?). The best design will depend on what you need a tree for, as well as on the language you implement it in.

But yeah if you want an empty tree I'd recommend just using None in python. You'll just need checks like 'if node' where you'd otherwise have pattern matching; you could improve QoL a little by defining an iterator over the existing children. If you really want a separate object then you run into all kinds of annoying stuff, including the fact that by default python will allocate separate objects for all of them, which is 1) slow and 2) bad for memory usage.
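A small sketch of that None-plus-checks style, including the quality-of-life child iterator (my illustration; the names are made up):

```python
from typing import NamedTuple, Any, Optional

class Node(NamedTuple):
    value: Any
    left: Optional['Node']
    right: Optional['Node']

    def children(self):
        # Iterate only over the children that exist, replacing
        # repeated 'if node.left is not None' checks at call sites.
        for child in (self.left, self.right):
            if child is not None:
                yield child

def size(tree: Optional[Node]) -> int:
    if tree is None:          # the 'if node' check standing in for
        return 0              # pattern matching on an empty tree
    return 1 + sum(size(c) for c in tree.children())
```

An empty tree is just None, so no extra sentinel objects get allocated.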


> if you truly need an empty tree

I'm sure in Python you regularly have uses for empty lists, dicts, and strings. (Probably not empty tuples, but what do I know.) Why would empty trees be particularly strange or exotic?


For what it's worth I prefer to use empty tuples as a cheap empty iterator (empty lists are mutable and not unique, so the empty tuple is a bit nicer).

The concept of an empty tree is not too exotic, but I'm struggling a bit to find a use for it (which is not to say that there isn't one). What makes it different in my mind is that with lists/dicts/strings it makes perfect sense to filter them, which is a bit weirder with trees. It's easy to imagine a scenario where you filter a lists and end up with no items, I'm struggling a bit to figure out what happens if you filter a tree, do you just throw out the entire subtree if one of the parents is filtered out? It seems to me that the answer depends strongly on what you want to use the tree for, and if you know that you will likely also know whether you need an empty tree or not.

Just to illustrate why the 'remove the subtree if the parent is filtered away' option is not obviously the correct one, consider that it also makes sense to just remove the node and return a list of disjointed trees. In that case the 'empty' case is just an empty list of trees.


Filtering any data structure means building up a new filtered copy, not removing from the current one, no?

    >>> x = [1, 2, 3, 4]
    >>> y = list(filter(lambda n: n < 3, x))
    >>> x, y
    ([1, 2, 3, 4], [1, 2])
Same with trees: No removal is needed for filtering, you could implement it something like:

    def filter_tree(f, tree):
        filtered_tree = new_empty_tree()  # it's useful if this is *not* None!
        for element in tree:
            if f(element):
                filtered_tree.add(element)
        return filtered_tree
Regardless, removal of individual, even internal, nodes from trees is of course possible without removing entire subtrees. The details depend very much on the actual kind of tree, but you can start at https://en.wikipedia.org/wiki/Binary_tree#Deletion


An algorithm that builds a tree out of some other data source matching some criteria can return an empty tree if no element matches the filter. Then, algorithms that process trees must take empty trees into consideration. (Yes, you could write "if notEmpty(tree) then f(tree)" in your language, but then you have a partial function f which doesn't work if tree is empty, and partial functions often lead to bugs because we tend to forget about their partial-ness).

Almost every "collection" data structure needs to have an empty representation. Trees are no different.
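The partial-function point holds for any collection, not just trees. A sketch using lists (the function names are mine):

```python
# A total function handles the empty case itself; a partial one pushes
# an implicit "must be non-empty" precondition onto every caller.
def total_min(items, default=None):
    return min(items) if items else default

def partial_min(items):
    return min(items)  # raises ValueError on an empty input
```

Every caller of `partial_min` must independently remember the non-empty check; forgetting it is exactly the class of bug described above.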


A leaf is a node that has no children. So you don't need to represent it directly. Just check both children and if they are both None, it's a leaf.

Empty tree can be represented with root node that is a leaf and has no data.


You forgot to include your Haskell code.


I'm talking about how you could do this in Python. Don't know much about Haskell.


The leaves are intended to stand in for more complex leaves; the actual data type held in the tree is boring, only its properties of nodes and leaves matter. (I think)


To play the devil's advocate here: no post speaks about any serious disadvantages of Haskell.

So, for a programming language that has so much to offer, why isn't it adopted more?

Could it be that it doesn't actually increase productivity to such a degree to justify the cost of change?

Legit question, I am not trying to be a troll.


I can’t say for Haskell, but I can say for Lisp, which similarly gets these “look how great” articles but also similarly doesn’t see a huge up-tick in usage.

Whether you’re a student fresh out of school, or you’ve been unemployed for 10 years as a sysadmin, or you’re an expert programmer already, Common Lisp is accessible to you. Those examples are real; they’re backgrounds of folks I either used to or currently work with. Being paid helps immensely.

So it’s not that the language is unlearnable, unreadable, or out-of-reach. (The pot-shots that random commenters in forums like these take on the language are usually shallow or even outright wrong.) In some cases, it’s even demonstrated to be asymptotically more productive.

So what’s the deal? I personally think it’s just that productivity isn’t in and of itself incentivizing enough. You know Python and C++, you’re relatively proficient at them, you know how to get the job done with them, why learn something new? Haskell/Lisp won’t get you a job (necessarily), it won’t allow you to do something with a computer that you fundamentally couldn’t do before, and it’ll suck up a lot of your time to figure out what’s going on with them. Moreover, there’s no big organization behind it (like Mozilla or Facebook or Microsoft or ...) so where’s the credibility? A bunch of researchers at some university? A bunch of open-source hackers?

I think one has to be personally invested in becoming a broadly more knowledgeable and more skilled programmer, just for the sake of it, and (IME) most people aren’t like that. I think one has to also have a penchant for finding simpler or more fundamental ways of solving problems, and that has to turn into an exploratory drive. Even if one is like that, learning Haskell is one of a hundred possible paths to improve oneself.

My comment shouldn’t be misconstrued as supporting a mindset of it being OK to just know a couple languages well. I think the hallmark of an excellent programmer is precisely this broad knowledge of not just tools, but ways of thinking, in order to solve the gnarly problems that come up in software.


Three answers:

First, it doesn't fit everywhere. The best advice I received is that FP fits best when you can think about your program as a pipe - data comes in, data goes out. The more your program doesn't look like that, the less well FP fits.

Now, Haskell can do non-FP things, but you're fighting against the nature of the language to do so. It's probably better to pick a language that you don't have to fight in order to write your program.

Second (though I'm not sure that this is actually a reason that people pay attention to): Compile time. How long does Haskell take to compile a multi-million-line code base? How long does Go take? (Yes, I know that Haskell may take fewer lines. It won't be enough fewer to make the compile time shorter, though, nor anywhere close.) If you've got a multi-million-line code base, you've probably got a reasonably large team, and you've probably had them for years. If they each compile, say, five times per day, and there are twenty of them, and they've been doing it for ten years, the time spent in compiling adds up to real serious money lost.

Third: For a large team, they won't all be A students. Half the programmers will be below average. I'm not sure that Haskell is a great language for them. I wonder if they can't do more damage with Haskell than they could with, say, Go. (Of course, they could do a lot of damage with C++, too...)


Well, adoption rates of programming languages have almost nothing to do with the features and qualities of the language itself. Swift is big because it's backed by Apple, Go is big because it's backed by Google, Javascript because it's on the web, etc. Python might actually be the only language that succeeded "on its merits" to some extent, and even then it seems more to do with some early success and libraries than the core nature of the language itself.

As the author of the post, I don't actually think I can make a compelling argument for why someone should switch to using Haskell in their day job. I don't have real experience in the software engineering industry, and from what little I do know language choice doesn't make a huge difference.

That said, I think it's valid to say that a given pattern is bad, or another pattern is better. I was trying to argue for that in the post in a couple cases that I think Haskell does well.


> So, for a programming language that has so much to offer, why isn't it adopted more?

In my experience it's usually some combination of FUD echoed throughout these comments such as:

Haskell is an academic language and cannot be understood by mere mortals

Well everyone is mortal and some people have learned Haskell so this is obviously hyperbole and meaningless. Is it difficult to understand? Yes... but I have a theory that this is largely because we aren't taught to think in the way Haskell asks us to. Most of us in industry invariably learn Haskell as a second or third language and by then certain patterns and expectations are present in our brains that we hold as truths.

The tragedy of this argument is that people think you have to understand everything in Haskell in order to get started. They point to more advanced features like lenses and profunctor optics and all of this jargon as proof. However I don't ever recall having to learn the entirety of template metaprogramming to get started in C++.

There's really a pyramid of features and the amount of Haskell you need to know to get started is small.

It is too hard to hire Haskell programmers

It's as hard as you make it out to be. There are plenty of people out there who would love to program in Haskell for their day job. When I started posting jobs for Haskell programmers my queue was full, constantly.

The problem is that unless your organization is fully invested in Haskell, people in your organization will find ways to hijack the process and make the claim come true by moving the goalposts. It might be hard to find Haskell programmers in your area, so they'll say you can't hire remote developers. Say you'll train people up in Haskell and they'll say we don't have anyone experienced enough, or not enough time, etc.

It's not hard to hire programmers. It's hard dealing with people who don't want Haskell to be adopted in your org.

The documentation is poor, there aren't enough libraries, etc

This may have been true more than ten years ago but it no longer holds. The documentation is phenomenal. It seems rough if you're not used to reading type signatures, but once you're familiar with rudimentary Haskell this disappears fast. Haskell code documents itself: whenever you add a top-level signature you're adding documentation that not only enriches the program but also cannot be wrong or go out of date.
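To give a feel for what "signatures as documentation" means, here are two invented functions whose types alone say a lot about their behaviour:

```haskell
-- Takes a list of anything and returns an Int: it can only be
-- measuring structure, since it knows nothing about the elements.
count :: [a] -> Int
count = length

-- May fail to produce a result, and the signature says so up front;
-- callers are forced to deal with the Nothing case.
firstLine :: String -> Maybe String
firstLine s = case lines s of
  []      -> Nothing
  (l : _) -> Just l

main :: IO ()
main = do
  print (count "hello")        -- 5
  print (firstLine "ab\ncd")   -- Just "ab"
```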

I think this one gets bandied around a lot by developers from big ecosystems that have deep corporate pockets to fund the development of frameworks, libraries, and tooling. Haskell has been getting more of that but it's still nothing compared to Java/.NET/Swift, etc.

Regardless there are libraries for every common task and ones that can do things with Haskell's type system that you simply cannot in other languages (or can only emulate, poorly, with possibly buggy run-time code introspections, templates, macros, etc).

The real world is messy and Haskell's type system just gets in the way

The real world is messy so why would you want to use a tool that makes incorrect programs permissible?

This comes up from folks who like how easy it is to get started with dynamic languages which are permissive about their inputs. I like this property of dynamic languages too.

What I don't like is all of the run-time inspection I have to do in order to know what data is safe to use. At first this doesn't seem like a problem with unit tests and a tight feedback loop. However, at larger scales in a code base it makes refactoring and reasoning about high-level patterns much harder. And despite the claim that "type errors are rarely ever the source of production issues anyway," I still find TypeError: undefined is not a function in logs more often than its proponents would like to think.

The point of all this is that we ship this code confident that we probably nailed ~85% of the problem and we tolerate the risk that there will be some number of errors that will be reported after the fact. Ship early and ship often. However in practice this sucks up a lot of time as a project matures.

I rather like the experience of not having to chase down where an errant null check was missed or where some code mistakenly mutated a shallow copy. I don't even have to think about that in Haskell. I can focus on the business domain logic.
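A small sketch of what that buys you (the `Config` type is invented for illustration): "updating" a value produces a new one, so nothing upstream can be mutated behind your back:

```haskell
-- Immutable-by-default record: the update syntax below builds a new
-- Config rather than modifying the original in place.
data Config = Config { retries :: Int, verbose :: Bool }
  deriving (Show, Eq)

withVerbose :: Config -> Config
withVerbose c = c { verbose = True }

base :: Config
base = Config { retries = 3, verbose = False }

main :: IO ()
main = do
  let tweaked = withVerbose base
  print (verbose base)     -- False: base was not mutated
  print (verbose tweaked)  -- True
```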


It’s hilarious to read all kinds of rationalizations for why haskell is not useful.

Servant + Aeson beats any api backend in any language, period. If you combine that with Elm on the frontend you’ve got a great onboarding path for new devs to learn enough haskell to work on the backend.

Of course, for production, ignore lenses, monad transformers, free monads, effect systems, etc. They’re awesome, but the complexity is not worth it in practice at this time.


This article reminds me of my coworker, who touts Haskell all the time yet ends up writing shitty Java code that is difficult to understand and performs like molasses.

Real world is not perfect, it is immutable with plenty of side effects. Using Haskell for day-to-day messy work is not trivial and should not be considered IMO, unless you have Haskell gurus all around.

I would rather take a dumb language like Go or Java over Haskell for work code.


> Real world is not perfect, it is immutable with plenty of side effect

Perhaps we should use a language that is immutable and can reason about side-effects. Like Haskell?


offtopic: if anyone's interested -- Standard Chartered Poland is looking for Haskell hackers: https://twitter.com/MikolajKonarski/status/11782723158152192...


tldr: I am NEVER nervous about refactoring some Haskell code.

Good:

After working in a variety of organizations using typed but also dynamic languages, I'm now writing all my back-end code in Haskell. I'm becoming more and more convinced that for multi-year, multi-programmer applications (a language like) Haskell is the only way to make it sustainable, while still being able to add features.

Stephen Diehl has a great writeup, "What I Wish I Knew When Learning Haskell": http://dev.stephendiehl.com/hask/

It's difficult to say to someone "Just go read books for a couple of months, because you need to understand purity, laziness, cross compilation, monad transformers (go read The Book of Monads), 20+ language pragmas, etc."

It does however feel like I'm learning useful stuff, and it's a lot of fun to get an executable that runs FAST.


ML seems to be a much more developer-friendly approach to me. Eager evaluation, no problem "escaping" to imperative code, no "purity". All the benefits of functional programming, a good type system, and a great module system.

I'm constantly sad that Standard ML is so outdated. No good tooling, no real unicode support, etc.

At least we have ocaml and F#.


PureScript is a strict ML too, and even closer to Haskell than F# and OCaml: it doesn't have the object-oriented bits and is pure (it doesn't permit side effects such as mutability the way those two do, short of the very explicit 'unsafe' functions).


This is about as far as I got with this...

"While it solves the problem of methods, and the mutation problem, it has a serious bug. We can’t have None as an element in the tree!"

Uhm... what?! Why not? You check that your left and right are None and if they are it is a leaf. And if they are not it is not. What your data value is doesn't matter and you can have as many None values in the tree as you like. Your tree doesn't need leaves to define where it ends, it ends when there are no more branches.


What about the singleton tree with just None in it? As in, what's the difference between `node(None, leaf(), leaf())` and `leaf()`?

Or the following:

             x
            / \
           /   \
          /     \
         y       None
        / \      / \
       /   \    /   \
    Leaf  Leaf Leaf Leaf


This

             x
            / \
           /   \
          /     \
         y       None
        / \      / \
       /   \    /   \
    Leaf  Leaf Leaf Leaf
looks like this:

             x
            / \
           /   \
          /     \
         y       None
There is no reason to explicitly define leaves.


But then wouldn’t the right node with None in it be the same as a leaf?

Could you specify what you mean in code maybe? If you don't explicitly define leaves then how can you represent the empty tree?


here, as I said, there is no need to represent an empty tree

https://gist.github.com/MadWombat/798c6d993a7d2ac4ac74d6624a...


So you just throw an error if you try and sort the empty list?

Anyway, I'm pretty sure what you've said here was addressed in the article. The version you've presented here is exactly the first alternative I showed, which isn't as good as a version written with ADTs because you can't represent an empty tree.
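For concreteness, here is how the ADT version distinguishes the two cases (a minimal Haskell sketch; the constructor names follow the article's Leaf/Node convention):

```haskell
-- With an explicit empty-tree constructor, "no tree" and "a tree
-- whose element happens to be Nothing" are different values, so both
-- are representable without ambiguity.
data Tree a = Leaf | Node a (Tree a) (Tree a)
  deriving (Show, Eq)

empty :: Tree (Maybe Int)
empty = Leaf

singletonNothing :: Tree (Maybe Int)
singletonNothing = Node Nothing Leaf Leaf

main :: IO ()
main = print (empty == singletonNothing)  -- False
```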


Doesn't have to throw an error, could just return an empty list, doesn't matter. All I am saying is that the problem the article describes, where the lack of algebraic data types in Python makes it impossible to describe the tree structure, doesn't exist. There is no need to explicitly represent an empty tree. It is enough to know where your tree ends. Yes, it is more elegant with the ADTs, but it is not impossible without them.

Also, if I really wanted to implement Leaf type, I would probably do it via __new__ and a bit of metaprogramming.


I think Haskell is conceptually cool, but I just can't stand its symbols and grammars. To me, "=>" means comparison and "|" means an OR, and I'm cringed to see $s and \s in a program code that makes it look like LaTeX. I also prefer the boundary of terms and expressions to be consistently marked up with parenthesis. I know this is just a matter of taste, but sometimes it affects your motivation a lot.


I have had this "the syntax is horrid" conversation many times. I share your experience: the way symbols are seen and mentally spoken clouds the concept. My adversaries argue from positions of clearly complete comprehension of the algebraic intent of these 'words' and can't understand my confusion. It's like arguing with a native English speaker about illogical spelling; it just gets a "get over it" response.


> "=>" means comparison

In what language? Did you mean ">=" or "<="? They mean the exact same thing in Haskell.

> "|" means an OR

It also means the same thing in Haskell.

  data Maybe a = Nothing | Just a
"data of type `Maybe a` is either `Nothing` OR `Just a`."

  max a b | a > b = a
          | otherwise = b
"max of a and b is either a when a > b OR b otherwise."

  odd x || x >= 5
"x is odd OR x is greater than or equal to 5"

A cool thing about Haskell is that it lets you define new operators. In the parsec package, you get the operator <|>, which keeps the meaning of OR but works with parsers.

  csvCell = csvQuotedCell <|> csvNonQuotedCell
"a CSV cell is either a quoted cell OR a non-quoted cell".
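For what it's worth, `<|>` isn't parsec-specific: the same OR-flavoured operator comes from `Alternative` in base and works on `Maybe` out of the box:

```haskell
import Control.Applicative ((<|>))

-- For Maybe, p <|> q is the first of the two that "succeeds"
-- (i.e. the first Just), keeping the OR reading of the symbol.
main :: IO ()
main = do
  print (Nothing <|> Just 5)                 -- Just 5
  print (Just 1  <|> Just 5)                 -- Just 1
  print (Nothing <|> Nothing :: Maybe Int)   -- Nothing
```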

> I'm cringed to see $s and \s in a program code that makes it look like LaTeX

I don't think they're so common that the comparison is valid. $s implies Template Haskell, which should be used sparingly. \s implies lambdas, which are nowhere near as common as the use of backslashes in LaTeX.

> I also prefer the boundary of terms and expressions to be consistently marked up with parenthesis.

The way you wrote this makes me think you prefer to write 2 + 5 * 2 as 2 + (5 * 2). You can do that, just like in any other language. I don't think it's common to add parentheses redundantly in any language, though.

If you actually meant something like preferring f x y to be written as f(x, y), it's not just a matter of taste. It's so the syntax makes sense with partial application. You can do

  f x y = ...
  g = f x
  h = g y
and it would be more confusing to write

  f(x, y) = ...
  g = f(x)
  h = g(y)
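A minimal runnable version of the sketch above (`f`, `g`, and the concrete numbers are invented for illustration):

```haskell
-- Partial application: applying f to one argument yields a new
-- function still waiting for the second one.
f :: Int -> Int -> Int
f x y = x * 10 + y

g :: Int -> Int   -- g = f 3, i.e. f with its first argument fixed
g = f 3

main :: IO ()
main = do
  print (f 3 4)  -- 34
  print (g 4)    -- 34, the same call spelled in two steps
```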


> $s implies Template Haskell

I think it's more likely to mean function application ...


At least, I've never seen someone use the `$` operator and put the right operand without any spacing in between. Not only would it cause errors on activating Template Haskell because GHC would interpret it as a TH splice[1], but also it would be weird to read because one tends to use the `$` operator when they have a multi-term expression they would otherwise not want to parenthesize. For example, take a look at this line in Yesod[2]:

  $logInfo $ pack $ show (a, b, c)
The one without a space is a Template Haskell splice and the ones with a space are using the function application operator. If logInfo didn't need to be expanded by Template Haskell and we deactivate that extension, we could write:

  logInfo $pack $show (a, b, c)
But I doubt anyone would because the implied parentheses around those operators are:

  (logInfo) $((pack) $((show) (a, b, c)))
[1] https://downloads.haskell.org/~ghc/7.8.4/docs/html/users_gui...

[2] https://github.com/yesodweb/yesod/blob/c8aeb61ace568cdc2bc81...


Yes, but I suspect euske, being unfamiliar with Haskell, just used an unfamiliar form.


I 100% agree. Haskell is a very sad story in that regard: the language brings tons of cool features to the table only to turn away most of its potential customers because of how ugly and weird it looks.


For those who were wondering about seeing this in Swift. You probably weren't, and I'm aware I could've leaned a bit harder on things that were built-in, but I was basically trying to meet in the middle between representing the Haskell code as written in the interview and something vaguely Swifty.

https://gitlab.com/snippets/1900852



Haskell seems cool, but if I’m going to invest time in a different programming paradigm, I’m more curious about APL/K/J. They seem more useful as well (to me).


Practically speaking, what I've heard is that the value of learning Haskell isn't so that you can then go use Haskell; it's so that you can then go write code in other languages as if it were Haskell.


I think that’s unfair. Haskell is used in the real world to solve really hard issues, like anti-spam at Facebook.

I’m simply more impressed by array-oriented languages :)


I have to disagree with the thesis here in its entirety. ADTs and pattern matching are not what makes Haskell good. Every article like this one just serves to show people the non-compelling bits of Haskell. The parts you can use in every other language, if you want to.

The parts of Haskell that make it a good language aren't the things that you can just write a tutorial for. They're about software engineering, not code snippets.

Purity and immutability remove entire classes of bugs caused by spooky action at a distance. When you assert this, people claim "I don't have those bugs", forgetting about the time someone else changed a function they wrote to mutate one of its arguments, breaking code three steps up the call chain.

Parametric polymorphism documents what information a function cannot use within its definition. If you point this out, people ask what good that serves. There's no way to explain how much easier it is to get things done when you can write a function and know that no matter what values are passed in, there cannot be special cases that trip you up.
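As a small illustration of that point (function names are mine, not from the comment): the types below constrain what the definitions can possibly do.

```haskell
-- A total function of type a -> a cannot inspect its argument, so
-- the only thing it can do is return it unchanged.
mystery :: a -> a
mystery x = x

-- A function of type [a] -> [a] can rearrange, drop, or duplicate
-- elements, but it can never invent new ones or special-case values.
keepEveryOther :: [a] -> [a]
keepEveryOther (x : _ : rest) = x : keepEveryOther rest
keepEveryOther xs             = xs

main :: IO ()
main = do
  print (mystery (42 :: Int))            -- 42
  print (keepEveryOther [1 .. 6 :: Int]) -- [1,3,5]
```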

I see people try to explain why the `Maybe` type is better than null values, and have their explanations rejected with "You still have to check for it. All you're doing is changing the syntax of the check." I've seen variations on that theme in maybe 10 different HN threads over the last 6 years. When all you talk about is ADTs and pattern matching, why would anyone ever look at the bigger impact of the type system? The relevant detail here is that an `Integer` can never be null, not that you use `Maybe Integer` to talk about potentially missing values.
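A tiny sketch of that distinction (`safeDiv` is an invented example, not from any library): the check isn't eliminated, but the type system pins down exactly where it has to live.

```haskell
import Data.Maybe (fromMaybe)

-- An Integer is never null; only a Maybe Integer can be absent, so
-- the compiler flags every call site that must handle the Nothing.
safeDiv :: Integer -> Integer -> Maybe Integer
safeDiv _ 0 = Nothing
safeDiv x y = Just (x `div` y)

main :: IO ()
main = do
  print (safeDiv 10 2)                -- Just 5
  print (fromMaybe 0 (safeDiv 10 0))  -- 0: the caller chose a default
```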

Further in that same direction, I see people say that the `IO` type just complicated things because your program has to do I/O anyway, so it always needs to be in `IO`. This is exactly the same as the `Maybe` problem, but that similarity is even further from being addressed by articles about pattern matching and ADTs. No, the similarity isn't "monads". Anyone who talks about them here has missed the point entirely. The point is that parametric polymorphism completely prevents distinguishing IO values from non-IO values, so code that's not written to work with IO values cannot do IO accidentally.
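A minimal sketch of that separation (`shout` and `greet` are invented names): the pure function's type excludes I/O entirely, while the effectful one advertises it.

```haskell
import Data.Char (toUpper)

-- With this signature, shout cannot read files, print, or touch the
-- network, no matter how it is written: IO never appears in its type.
shout :: String -> String
shout = map toUpper

-- Code that does I/O says so in its type instead.
greet :: IO ()
greet = putStrLn (shout "hello")

main :: IO ()
main = greet  -- prints HELLO
```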

There are a lot more cases, especially when you get into more sophisticated things possible in the type system using ghc extensions like generalized algebraic data types or higher-rank types.

But all of the reasons you should be using Haskell in reality come down to practical large-scale software design concerns. The language lacks features that make several common classes of bugs possible. It makes several other common classes of bugs take a lot more work to implement than the non-buggy way to solve the same problem. These aren't things you can just write a short article about. They're things that require years of experience and introspection to see are even problems, and a willingness to accept that a lot of the problem is the ecosystem, not an individual failure to execute. None of that fits in an article.

I think articles about how great pattern matching and ADTs are make the language look worse, because anyone with some experience can look at what's actually happening and say "I can do that in <other-language>; Haskell clearly doesn't have anything to offer." In other words: stop writing these articles. They drive people away from Haskell rather than encouraging them to look at the good parts.


Honestly, I think it's worth bragging about just how much code re-use and modularity one can get out of combining parametric polymorphism with ad-hoc polymorphism (type-classes).


I agree with that. Foldable and Traversable are complete marvels of usability.


I literally, just this morning, found myself writing some annoying repeated glue-code, and realized I could shorten it up where it mattered by taking the glue and turning it into a type-class.

IMHO, people brag about type-classes too much with respect to the particular type-classes that embody category-theoretic constructions, and not enough about their original application to ad-hoc polymorphic overloading. If I can think of a Task X which I have to do for a variety of somewhat different types in somewhat different ways, but which is used in a polymorphic way, then I can absolutely make a type-class out of that.

It's like how people think the big secret to object-oriented programming is inheritance, but actual OOP experts tell you to prefer interfaces (which are almost-but-not-quite just like type-classes!) and abstain from building large inheritance hierarchies.
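For instance, a made-up "Task X" class might look like this (`Describe` is purely illustrative), with each instance doing the same job in its own way:

```haskell
-- An interface-like type-class: one operation, implemented
-- differently per type, used polymorphically at call sites.
class Describe a where
  describe :: a -> String

instance Describe Bool where
  describe True  = "yes"
  describe False = "no"

-- Instances can build on each other: describing a list reuses
-- whatever instance exists for its elements.
instance Describe a => Describe [a] where
  describe xs = "[" ++ unwords (map describe xs) ++ "]"

main :: IO ()
main = putStrLn (describe [True, False])  -- [yes no]
```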


I wouldn't know. It was impossible to learn.


>I wouldn't know. It was impossible to learn.

People shouldn't downvote you: the sarcasm bit is funny, and it actually points to one of the major reasons Haskell is not gaining more acceptance in spite of its many great features:

- the syntax is completely alien (to the point of turning quasi APL-ish in some cases) and scares away potential users.

- the community is so focused on esoteric stuff that the "how do I do X that's super easy to do in traditional languages" is completely missing from the conversation.


the syntax is completely alien (to the point of turning quasi APL-ish in some cases) and scares away potential users.

the community is so focused on esoteric stuff that the "how do I do X that's super easy to do in traditional languages" is completely missing from the conversation.


The syntax is not alien. Care to give an actual example that confused you?


The problem with Haskell is that it's smarter than most developers. You can study really hard and maybe you'll be good at Haskell. And then you'll find that there are no common libraries for the things you want to do because all the other developers were writing them in Python.

Such is life.


Being a good person is a goal in itself.

In the same way, Haskell is a good language, it's a moral language. Of course this causes some pain, but it's worth it.



