What was your "ah ha" moment with Haskell? (reddit.com)
116 points by dons on June 24, 2012 | hide | past | favorite | 59 comments


My "ah ha" moment with Haskell was after a few years of using it quite regularly, I realized that it wasn't actually making me more productive in the kind of code I actually write from day-to-day. It's a lovely language and I wouldn't discourage anyone from using it, but for my purposes I realized it was more exciting than useful at some point, and after that I haven't been back to it as much.


Agreed. It's so beautiful, but so constraining. I'm a much better developer for having used it, but returned to Python/Ruby/Java[for-speed] after spending lots of time with Haskell.

That said, I desperately miss static-typing and Hindley-Milner type inference... I keep searching for the perfect language.


Then I would be interested in your opinion of OCaml/MLton, F#, and Scala. To me they seem like a good balance. If you are more adventurous, then try Felix.

EDIT: apparently someone did not like your comment. Some downvotes confound me.


Would that downvotes interested me...

To your questions:

OCaml: seems like a hackish [S]ML. There's a nice comparison between SML and OCaml here: http://adam.chlipala.net/mlcomp/ . I like SML's syntax, but OCaml made it too easy to be imperative and seemed too hackish. Most of the OCaml I've seen looks like weird C, but written in OCaml because it's F4ST3R.

F# : I run Linux... next!

Scala*: I can't stand it. I use Python for everyday coding and I really don't like the philosophy behind str(), len(), and friends, but it's otherwise straightforward. There's pretty much only one [reasonable] way to do things in Python. Scala seems like the evil lovechild of Perl and SML. Classes and "case classes"? WTF? Type inferencing, but not powerful type inference? If you're going to move to Java++ without going too far toward Haskell, then Gosu or Mirah seem like better compromises. That said, I've only investigated Scala [not coded in it], so my griping is likely due to a lack of familiarity.

SML/MLton: the syntax is 95% good, but they should have embraced significant whitespace wholeheartedly. Do they really need an "end"? But, generally, I really like SML's thinking. In particular, I'm rooting for SML by following Yeti (https://github.com/mth/yeti; but "case" is closed with "esac", really?!) and Roy (http://roy.brianmckenna.org/). Oh, and I hate header files. Sooooo 1995...

Clojure: no static typing. I want to believe, but the lack of static typing (including Hindley-Milner type systems) seems like a shortcut. I think the static/dynamic typing argument is a relic of the era before good static-typing systems, and I don't think that big, server-side languages should be dynamically and/or weakly typed. That said, Stuart, and his hair, are great.

Felix: interesting, but I see no mention of type inference, so I have concerns about the type system. Also, the wiki is broken, and that makes me think "dead project".

But, unfortunately, I want mature toolchains, libraries, etc, so, though I wrote a mid-sized web framework in Haskell, I'm one of those who is waiting for a functional language to emerge as the winner. Until then, I'll work in Python and will support Yeti and Roy.

* I've forgotten where I saw it, but Scala also had some bizarre rules around interpreting variables in case patterns involving the capitalization of the identifier (lowercase names in a pattern bind a fresh variable, while capitalized names are matched against an existing value). I closed the book at that point. Haskell has special notations for special features, not special assumptions for normal features.


     [Scala] Type inferencing, but not powerful type inference?
In Scala, Hindley-Milner type inference is not possible because Scala is object-oriented, unlike Haskell.

OCaml does have Hindley-Milner, but not for the OOP features, and whenever I played with OCaml it felt like two different type systems shoved into the same language (much like Obj-C). Scala, on the other hand, is more consistent, elegant, and simpler. That's why it has "case classes": case classes are used for algebraic data types, which have special properties that aren't necessarily shared by normal classes.

And because of implicits, the type system is also more powerful than what's available in OCaml and even Haskell. It's arguably too powerful, but the things that the collections library can do have no match in other languages. E.g. http://stackoverflow.com/a/1728140/3280

I also liked to bitch and moan about the lack of real type inference in Scala; however, in OCaml and Haskell, just because the types are not written down doesn't mean you can ignore them. Quite the contrary: you always have to be aware of those types, as opposed to working in a dynamic language. That's why it's standard practice in Haskell to add the types explicitly for public APIs, because otherwise it hurts readability a lot.


> And because of implicits, the type-system is also more powerful than what's available in Ocaml and even Haskell.

I don't think that's true. You can do the same thing (e.g. collections that choose better representations) via type families/associated data types in Haskell, e.g. in [1] or [2].

That said, what is true is that Odersky has modified the Scala type system in quite interesting ways, specifically to support his collections library. [3]

[1]: http://hackage.haskell.org/package/adaptive-containers

[2]: http://hackage.haskell.org/package/accelerate

[3]: http://lampwww.epfl.ch/~odersky/papers/fsttcs09.html
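
The associated-data-type trick can be sketched in a few lines. This is a toy illustration with invented names, not the actual API of the adaptive-containers package: the class carries a data family, so each element type can pick its own concrete representation.

```haskell
{-# LANGUAGE TypeFamilies #-}

-- Toy sketch: each instance chooses its own representation.
class AdaptArr a where
  data Arr a
  fromList :: [a] -> Arr a
  toList   :: Arr a -> [a]

-- A specialized representation for Bool (imagine a packed bit-vector).
instance AdaptArr Bool where
  newtype Arr Bool = BoolArr [Bool]
  fromList = BoolArr
  toList (BoolArr xs) = xs
```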


> F# : I run Linux... next!

I feel your pain. The other thing that bothers me is that the Mono developer(s?) have been extremely boneheaded about tail recursion. Hey guys, if you are reading this, sorry about the tone, but that needed saying. Jon Harrop can be quite a troll, but he is right in calling Mono out on this. Their incorporation of futures and promises is indeed an attraction, though. But then we only have a non-binding promise from Microsoft that they won't go after other implementations. That's not a threat I would like to be under.

I really would love a multicore-enabled runtime for OCaml. I don't even ask for threads, just the ability for some of the concurrency exposed by the functional semantics to (optionally) run in parallel without forking processes.

Yeti... I don't know; unless the JVM includes proper tail calls, I will always have this nagging sensation in my brain.

Felix, BTW, is far from dead; in fact, it is quite the opposite: it's intensively developed. The wiki is very new, so there may be some breakage there. The original website is stable. Oh, and it uses Hindley-Milner type inference.

From your comment it seems you would like the whitespace-syntax preprocessor at http://people.csail.mit.edu/mikelin/ocaml+twt/ . I have never used it, though.

I wonder if there is anything akin to camlp4 for SML to give you significant whitespace. But I think you will get over the syntax :) I have to work with arrays a lot, and there SML is a bit more verbose.


Where do you get the impression that Microsoft's promise is non-binding?

Microsoft's official FAQ on their Community Promise explicitly states that it is irrevocable and legally binding [1]. I have previously read complaints that the promise doesn't cover enough of the .NET libraries, among other issues [2], but I was not aware until now of anyone claiming that it is non-binding.

[1] http://www.microsoft.com/openspecifications/en/us/programs/c...

[2] http://www.fsf.org/news/2009-07-mscp-mono


I don't think you understand either ML or Lisp. SML would never make whitespace significant because: 1) that's a brain-dead choice, and 2) ML, much like Lisp, is used as a language, as a notation, and as a "kernel" language.

Clojure without dynamic typing is bathwater without the baby. What is the point of interactivity, homoiconicity, and macros if the language is statically typed?


> What is the point of interactivity, homoiconicity and macros if the language is statically typed?

If the type system can handle it? MAGIC!
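
Template Haskell is the concrete answer for Haskell itself: macros run at compile time, and the code they generate is type-checked like any other code. A minimal sketch of the classic staged-power example (it must be spliced from a separate module because of GHC's staging restriction):

```haskell
{-# LANGUAGE TemplateHaskell #-}
module Power where

import Language.Haskell.TH

-- power n generates the unrolled function \x -> x * (x * ... * 1).
power :: Int -> Q Exp
power 0 = [| \_ -> 1 :: Int |]
power n = [| \x -> x * $(power (n - 1)) x |]
```

In another module, `$(power 3) 2` evaluates to 8, and a badly typed expansion is rejected at compile time rather than blowing up at runtime.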


Is there any specific reason why macros can't work in a statically typed language? How does Typed Racket handle this. Or does it not have macros?


Typed Racket is more like type annotations with a checking routine than anything. If you're writing a macro, you want generality.

You achieve this by omitting the type annotation and reverting to normal Racket.

You can do macros in a statically typed language, but it would (and does) get ugly fast.


Why do you say "whitespace is a brain-dead choice"?


I'm sure any downvotes he got (I didn't moderate) were due to his making a fairly strong, and fairly objective, claim -- Haskell is constraining -- without saying why it was constraining or even elaborating in what manner it was constraining.


Could you give a concrete example that shows why Haskell is constraining?


> I realized that it wasn't actually making me more productive in the kind of code I actually write from day-to-day

I've got a quip for that in my quotefile:

> "Haskell mainly helps with my C++ template coding when I'm doing money oriented programming" -- fnord123


I've noticed that slowly, but surely, Haskell really is winning. C++ is now basically running as fast as it can to become Haskell. It's such an old language with so much baggage that "as fast as it can" isn't very fast at all, and it has no chance of ever reaching it, but the trendline is clear.

The question the programming community faces over the next, oh, ten years or so, is "Can we get the benefits of Haskell without the strict attention to the type system and without having to rigidly separate IO?" Or a bit more sarcastically/cynically, can we get the benefits without having to fundamentally change how we do business? My gut says no, but I'm open to being proved wrong. (Oh, and yeah that's not the only question, there's others like "What about OO? Can we keep it?", but I think that's really the core question; do we really have to rigidly control our side effects or can we keep our sloppy side-effect usage? Everything else is either incidental next to that, or flows from it.)


The answer is no. Marking side effects is the one thing, probably more than any other, that makes Haskell awesome for building applications. For instance, STM is awesome in Haskell because it is easy for the compiler to see any and all side effects.

The real thing that would be cool would be a strict Haskell with optional laziness.


As I said, my gut agrees with you, but given the relative newness of this idea, I think the programming community at large should be given some time to make the case that the benefits can be obtained without so much of the cost. There's only a bare handful of languages that have even taken a serious swing at it, like Clojure and lately D. I want more evidence before I call it.

Probably the most promising approach that might salvage conventional programming approaches is a process-based model like Erlang or Go, where each process is internally mutable (mostly unlike Erlang, though it does have the process dictionary), but strictly segmented such that one process can not mutate another's state. This hybrid approach might be viable, and while it's not exactly business-as-usual, it's not as far a trip as full-on IO isolation. (Still, that affords just slamming everything in one process, at which point you don't win much. It will be interesting to watch Go's ecosystem develop and see if goroutines manage to become something deeply and pervasively used in all libraries or a thing occasionally used when the situation is desperate.)


> mostly unlike Erlang, though it does have the process dictionary

Yes and no. While Erlang structures are not mutable (aside from the process dictionary and the process's message queue), each new iteration of the process loop mutates the process itself, as the process goes from one state to another.


Erlang values are not mutable. You can't have a 4, lose the execution pointer to another process, and when it comes back you suddenly have a 5 in that variable. (Barring arbitrary C, of course.) That's the aspect of "immutable" that matters from a multithreading point of view. For most of what Erlang does, it would be fine to have mutable variables but immutable values, as if everything were as immutable as a Python string but you could freely reuse variable labels just as you can set a = "A", then a = "B" in Python. I think that's basically what Go does, though I haven't quite studied it enough to be sure.


> Erlang values are not mutable. You can't have a 4, lose the execution pointer to another process, and when it comes back you suddenly have a 5 in that variable. (Barring arbitrary C, of course.) That's the aspect of "immutable" that matters from a multithreading point of view.

Which is of no relevance to Erlang in the first place, since it does not expose threads to the developer.

> I think that's basically what Go does, though I haven't quite studied it enough to be sure.

Go has mutable structures and mutable bindings.


"Which is of no relevance to Erlang in the first place, since it does not expose threads to the developer."

It may not expose them to the developer in the sense that they are available for the developer to directly manipulate, but it is certainly exposed that multiple cores may be running Erlang simultaneously. To the extent that it isn't relevant to Erlang, it is because Erlang has made it not relevant to Erlang by a conscious choice of the developers, not some sort of accident.

"Go has mutable structures and mutable bindings."

Yes, I said that, but can a structure owned by one goroutine be directly modified by another such that a single goroutine can observe that a reference has changed values which the goroutine in question has not changed? Do structures even belong to goroutines? One of the things that turned me off when I looked at it a while ago was that the docs were giving me a hard time answering this question, but I consider it a rather fundamental one. (But this was a while ago, much closer to its beginning.)


> it is certainly exposed that multiple cores may be running Erlang simultaneously.

Sure, but there is absolutely no way that values can change "under your feet" within a process since processes don't share memory. Even if objects were mutable within a given process that would make no difference.

> Yes, I said that, but can a structure owned by one goroutine be directly modified by another such that a single goroutine can observe that a reference has changed values which the goroutine in question has not changed?

Not sure what you mean by "a single goroutine can observe that a reference has changed value". If a structure is not local to a goroutine A (because it was carried through a channel or was created in a lexical scope other goroutines can see), then other goroutines will be able to alter it, yes. As to whether A will be able to see that: unless A memoized the old value (by deep-cloning the structure), A will see new values and not old ones.

> Do structures even belong to goroutines?

Go has no built-in concept of ownership, so that question is a bit tricky. Structures are only exclusive to a given goroutine if they are not visible to or shared with other goroutines. So if the structure in question was created in a scope making it visible to multiple goroutines, or it has been carried across a channel to another goroutine, then other goroutines will be able to modify it from "under your feet".


For all my love of Clojure, having used its STM implementation I can wholeheartedly agree with this.

Having side effects in a transaction occur more than once because you didn't pay attention is a real pain to debug, mostly because the transaction won't be retried until the program is exposed to heavy load leading to serious memory contention.

At that point, debugging concurrent designs turns into a nightmare, no better than having to deal with deadlocks etc., the very thing STM is supposed to magically make go away. In some cases I even had to resort to locks because it was easier to handle IO that way.

Now, Haskell's type system simply would not allow any sort of IO side effects inside an STM transaction. Yes, having to explicitly handle IO may sound like a lot of work, but in my experience it makes everything easier.
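
Control.Concurrent.STM makes the point concrete: a transaction has type `STM a`, not `IO a`, so arbitrary IO inside it is a type error, and the runtime is free to retry it under contention. A minimal sketch (using the stm package that ships with GHC):

```haskell
import Control.Concurrent.STM

-- Transfer between two accounts; no IO can sneak into the transaction.
transfer :: TVar Int -> TVar Int -> Int -> STM ()
transfer from to amount = do
  modifyTVar' from (subtract amount)
  modifyTVar' to   (+ amount)

main :: IO ()
main = do
  a <- newTVarIO 100
  b <- newTVarIO 0
  atomically (transfer a b 30)
  balances <- atomically ((,) <$> readTVar a <*> readTVar b)
  print balances  -- (70,30)
```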


What's your go to language now?


I trust that you've read the excellent essay "And then there's Haskell..."

http://www.xent.com/pipermail/fork/Week-of-Mon-20070219/0441...

(Edit: after further research, if you were on Hacker News in mid-January last year, this would in fact be old news; sorry for the unwitting repost.)


Off topic, but my "ah ha" moment with both Clojure and Lisp was this blog post by John Lawrence Aspden:

http://www.learningclojure.com/2010/09/clojure-faster-than-m...

He managed to get a statement to go as fast as the JVM could possibly go, and he did this by getting the code to write code (the code added type casts to every variable, which apparently gave the JVM the info it needed to optimize like crazy). There is no way to do that without hard-coding, and if you don't know what kind of data you are going to get, then obviously there is no way to hard-code anything. In other words, this kind of stunt can only be done in a language that allows this kind of code-that-writes-code.


> In other words, this kind of stunt can only be done in a language that allows this kind of code-that-writes-code.

On the other hand, this kind of stunt will still be a stunt, not a common use case of the language.


There were far too many 'ah ha' moments, so I'll take one that I don't hear too often ;). Suppose that you are working on FFI code:

    do
      x <- malloc        -- note: no size argument
      poke x myCDouble
      return x
My C reflex was: 'I have to specify how much memory I want to allocate, but malloc doesn't take an argument; what the heck?'. Obviously, since Haskell has proper type inference, it can deduce that x is a pointer to a CDouble and has no trouble allocating the proper amount of memory. But for a moment I was thinking it could read my mind :).
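
Filled out into a self-contained file (the wrapper name allocCDouble is mine, not from the original comment):

```haskell
import Foreign.C.Types (CDouble)
import Foreign.Marshal.Alloc (free, malloc)
import Foreign.Ptr (Ptr)
import Foreign.Storable (peek, poke)

-- malloc :: Storable a => IO (Ptr a): the allocation size comes from
-- the Storable instance of the inferred element type, here CDouble.
allocCDouble :: CDouble -> IO (Ptr CDouble)
allocCDouble v = do
  x <- malloc
  poke x v
  return x

main :: IO ()
main = do
  p <- allocCDouble 2.5
  v <- peek p
  print v  -- 2.5
  free p
```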


I studied Haskell at university, but there was very little explanation of why it would be useful or of the fundamental difference between it and a standard imperative language (or maybe there was and I skipped that day).

At that point I assumed it was simply a language invented by academics in order to torture undergrads.

It wasn't until a bit later, playing around with things like Python and JavaScript and using closures/lambdas, that I realized I could use some of the functional ideas drilled into me when doing Haskell to write simpler code.

Now when I go back to Java I often get frustrated at the amount of code I have to write simply to work around the fact that functions are not first-class objects.


> Now when I go back to Java I often get frustrated at the amount of code I have to write simply to work around the fact that functions are not first-class objects

Frankly, I've mostly gotten over Java's lack of first-class functions and just make do with the boilerplate of regular for-loops, or of anonymous implementations of interfaces that are just stand-ins for functions, etc. What I really struggle with nowadays is Java's (relative to Haskell's) weak type system. I'm only a novice Haskell programmer, but even so I miss things like Maybe, Either, tuples, etc., whose absence can be really painful to work around in Java. I've realized I would rather work in a dynamically typed language or a strongly statically typed language than the wretched mess that is Java.


Remember, Maybe and Either are not language features in Haskell but library features! Java is totally capable of hosting them. (Though there isn't any special syntax for tuples in Java.)
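
Indeed, they are plain algebraic data types. Hiding the Prelude versions, one can write them out verbatim and use them like any other library code:

```haskell
import Prelude hiding (Either (..), Maybe (..))

-- The standard declarations, reproduced as ordinary library code.
data Maybe a = Nothing | Just a deriving (Eq, Show)
data Either a b = Left a | Right b deriving (Eq, Show)

safeDiv :: Int -> Int -> Maybe Int
safeDiv _ 0 = Nothing
safeDiv x y = Just (x `div` y)
```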


But if you implemented Maybe in Java, there still wouldn't be any compile-time guarantee that Nothing would be handled, right? At best you'd have something in code that more strongly encourages a certain convention.

Just now I briefly looked at implementations of Maybe in Java and at least one of them fakes this with checked exceptions, while another seems to lean heavily on Guava's Function interface, which is just awkward to use. I'd certainly be interested in seeing alternative implementations, if you know of any that are decent.

Also, a maybe more serious issue is that since it's not idiomatic Java, you'd have to do a lot of wrapping around libraries, and a lot of hard selling to colleagues. Admittedly that's no longer a flaw in Java's type system, but a big part of a language is its community and ecosystem, and in Haskell's case people using it have already bought in to the advantages of stronger types.


There's also no compile-time guarantee that Nothing is handled in Haskell; there are plenty of unsafe partial functions. In a non-total language, you unfortunately have to avoid partial functions without help from the compiler.

You're quite right that programming with these kinds of types in Java introduces you to a whole new kind of Hell! I'm not sure I'd recommend it.

However, I'm simply pointing out that types like these are _not_ language features. Some languages may be better suited to them than others, but they are definitely library features, and had best not be considered otherwise.
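
A tiny example of the missing guarantee: a non-exhaustive match compiles (with at most a warning) and only fails when it hits the missing case at runtime.

```haskell
-- No equation for Nothing: partial, yet accepted by the compiler.
unsafeGet :: Maybe a -> a
unsafeGet (Just x) = x

main :: IO ()
main = print (unsafeGet (Just 42))  -- 42; unsafeGet Nothing would throw
```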


Yes, but without pattern matching and a do-like syntax sugaring they are close to useless, unfortunately.


> Java is totally capable of hosting them.

No, because your Maybe or Either reference may still be null, so you get "Maybe fuck you" and "Either fuck you, or fuck you". That's the first part.

The second part is, due to the lack of match completeness check (or more generally match) and the shitty type system, APIs forcing developers to safely unwrap or rewrap values are... shitty.


Whilst you're mostly right about the second bit, your first complaint really isn't valid, since in Haskell, every value is basically "My Type + Fuck You" (because codata isn't distinguished from data).

    fuckYou :: a
    fuckYou = fuckYou

    justTrolling = Just fuckYou

That said, however, I'm very sympathetic to the rejection of non-total languages. But we must note that even though Haskell is inconsistent as a logic, it is still able to derive tolerable advantage from the use of option types, etc.


I have yet to have my big ah-ha moment for how to do what I do (scientific computing) in Haskell.

I recently found myself needing a proof-of-concept implementation for solving a bunch of big tridiagonal matrices in parallel using MPI. I thought to myself, "here's an opportunity to use Haskell!", but I must confess I'm rather stumped as to how one goes about allocating some memory, banging on it, communicating a subset of it to other processor(s), reading a buffer from the other processor, and then banging some more on the memory I allocated before, based on what I got back from the other processor(s).

Does one actually attempt to control the machine with this level of granularity with Haskell? Can one actually get any mileage out of the type system doing this sort of thing? Or am I just trying to fit a square peg in a round hole?


In my experience, Haskell is not yet a great language for numerically intensive computing.

I'll explain in a bit, but before I do, let me first address your question about "can I bang on bits?". Yes, you can allocate memory and do all the low-level hacking you please in Haskell. It's not really any harder than in C, although the notation is different and that throws people. But because this is a very imperative way of programming, it's also not going to be any faster than C (typically it'll be a bit slower).

There are even MPI bindings for Haskell, and they look pretty much the same as for other languages (i.e. very low level).

If you're just foontling around imperatively in big homogeneous arrays and sending messages, then the type system really won't do you any good, and you'll rightly find yourself wishing for the notational convenience and speed of Fortran 95.

You could use immutable arrays (the Vector type is your friend) and higher order functions instead, and thereby benefit from Haskell's rather nice parallel evaluation support with only a little effort. This is quite practical, and can lead to pretty code that runs quickly.
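
For instance, with the parallel package (and GHC's -threaded runtime), a pure kernel over immutable data parallelizes with a one-line strategy annotation. A sketch with an invented kernel:

```haskell
import Control.Parallel.Strategies (parListChunk, rdeepseq, using)

-- Hypothetical pure kernel, evaluated in parallel 1024-element chunks.
kernel :: Double -> Double
kernel x = sqrt (x * x + 1)

run :: [Double] -> [Double]
run xs = map kernel xs `using` parListChunk 1024 rdeepseq
```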

Where all the fancy type-related machinery comes into play for numeric code is still largely a matter of research. There are interesting projects underway for parallel programming (both on CPUs and GPUs) that rely heavily on the type system. They're not obviously useful for real work yet, and since they rely on advanced type system features, neither are they something you just pick up and use as a newbie. Nevertheless, I think they're pretty cool projects, and I have been watching them for a few years.

So while there's a lot of interesting stuff going on, the current state of affairs is somewhat mixed. You might enjoy learning your way through it, though; there are many rewards to the path.


> There are interesting projects underway for parallel programming (both on CPUs and GPUs) that rely heavily on the type system

Are you referring to Accelerate[1]? And isn't Repa[2] stable and usable?

[1]: https://github.com/AccelerateHS/accelerate/ [2]: http://repa.ouroborus.net/


Unfortunately, repa and vector are sometimes a few times slower than their counterparts in C or C++ for lack of SIMD intrinsics. Thankfully, things are moving forward fast on that front:

http://ghc-simd.blogspot.com/


repa also has some unfortunate lacunae, either in the API or in the documentation.

For instance, one thing I tried to do but couldn't - apply `scanl` to an array with a piece of intermediate state that is threaded through the computation, save the piece of state at the end, and then apply `scanr` to the same array, using the piece of state from the application of `scanl`.
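
For plain lists (not repa arrays), `Data.List.mapAccumL` and `mapAccumR` are exactly that shape: a scan that threads a piece of state through and hands back the final state. A sketch using a running sum as the state:

```haskell
import Data.List (mapAccumL)

-- Returns (final state, scanned list); e.g. [1,2,3] -> (6, [1,3,6]).
scanWithState :: [Int] -> (Int, [Int])
scanWithState = mapAccumL (\acc x -> (acc + x, acc + x)) 0
```

The final state could then seed a `mapAccumR` pass over the same list, which is the left-then-right pattern the repa API wouldn't express.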


The key bit is "not yet"! :) You're highlighting all the right points, and I'm actually spending some time this summer putting the pieces together for a better numerical story, with some really cool use cases, aiming for an out-of-the-box, turnkey experience with numerical-algorithm awesomeness.

I think once the pieces are laid out nice and clear, a dramatically better numerical toolchain will be born. :)


The first example of parametric polymorphism, as introduced by a good teacher.

I wasn't even attending the class proper, but taking notes for a deaf student as a paid job.


My "ah ha" moment with Haskell was when I ragequit for the 23rd time and decided that Haskell is probably not for me.


That's also what I did when I realized that the "return" function isn't actually for returning values.

But then I came back for the 24th time and got hooked. Now I'm doomed forever.


:) What an unfortunate function name. But "pure" isn't much better...


I always thought it ought to be "inject" or something. "return" is just outright confusing to novices and just encourages them to believe that do ... return notation has something to do with imperative programming. Which it does, of course, but only by the most circuitous of routes.

My embarrassing Haskell moment was how long it took me to realise that I could never get Arrow notation to work because the first argument to an arrow was the "arrow type" bit, so I was always trying to pass the wrong number of arguments to (***) and friends. Took me ages to get over that hump.
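
A small demonstration of why "return" misleads: it is just a function that wraps a value in the monad, not control flow, so it never exits a do-block early.

```haskell
demo :: Maybe Int
demo = do
  _ <- return 1  -- does not "return" from the block
  return 2       -- demo == Just 2
```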


I didn't have a single "ah ha" moment, but things became much clearer when I realised that although it was possible to have heterogeneous collections through the use of type-classes, what I nearly always wanted was to create a new data type with a constructor for each behaviour I was interested in encompassing.

The fact that data types are so cheap, both syntactically and computationally, really frees you from having to avoid creating them. I did go too far the other way for a while and created new types for everything. There exists a happy middle ground, but it's hard to define where exactly it lies.
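
The "one constructor per behaviour" pattern, sketched with invented names:

```haskell
-- A single cheap sum type instead of a heterogeneous
-- collection of type-class instances.
data Shape
  = Circle Double
  | Rect Double Double

area :: Shape -> Double
area (Circle r) = pi * r * r
area (Rect w h) = w * h

totalArea :: [Shape] -> Double
totalArea = sum . map area
```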


I was converting a somewhat complex Haskell example into Ruby for a presentation and kept running into neat concise Haskell expressions that I could not easily express in Ruby.

It got me thinking that Haskell might be a better Ruby.


Reading the "Blow your mind" wiki page was ah-ha overload: http://www.haskell.org/haskellwiki/Blow_your_mind


I really like this:

    twoK = 1 : map (2*) twoK

I still have a long way to go, though - I can't write a lot of it without getting stuck and giving up. It would be really nice if I were comfortable with Haskell.
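
Spelled out with a type signature, that one-liner is the lazily self-referential list of powers of two:

```haskell
twoK :: [Integer]
twoK = 1 : map (2 *) twoK

main :: IO ()
main = print (take 8 twoK)  -- [1,2,4,8,16,32,64,128]
```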


My ah ha moment was when I found this tutorial: http://learnyouahaskell.com/ , because it was really fun and I learned a lot. Before that, I was trying to learn from http://www.haskell.org/tutorial/ and it was so boring that I almost quit.


It wasn't Haskell but another functional programming language (Scala). I got my major "ah ha" moment when I did a code review with another experienced functional programmer. It showed me that functional programming is a different paradigm.


It's already been said, but my ah ha came when, after reading quite a bit about it, I realized I was incapable of understanding it and moved on. :)


Reading won't get you anywhere unfortunately.

The key to Haskell is doing: writing code. You'll have many 'ah-ha!' moments after that.

Also, I think the major problem with Haskell is that A LOT of the online content out there is primarily aimed at academia and, quite honestly, baffles me too.

If you stick with LYAH and RWH and... just doing code, then it becomes a lot more usable and fun to program in.


Don't give up! Remember your friends:

    Hoogle is your friend

    the REPL is your friend
(The Typeclassopedia, too, but that's for later.) The point is to get into the REPL and start defining the contours of the type system, monomorphism restriction, inference, etc., in your mind.


Learn Haskell to see how things could be done. Don't use Haskell for things that should be done.



