Hacker News

That lens is completely useless. If you are equating lenses with struct accessors, then you're adding abstraction for nothing. Lenses are good for things like extracting all the values buried in deeply nested data structures.

Say you have some nested data structures:

  data Struct1 = Struct1 { someStuff :: [Struct2] }
  data Struct2 = Struct2 { otherStuff :: HM.HashMap Text Struct3 }
  data Struct3 = Struct3 { name :: Text }

And you have a list `[Struct1]` and want a `[Text]` of the `name`s of every Struct3. How do you do this in an imperative language? By manually accessing fields, nested for loops, etc.

The van Laarhoven formulation of lenses, and ekmett's lens library of optics in Haskell, make this trivial.

`myList ^.. each . someStuff . each . otherStuff . each . name`

Now, what if you wanted to modify all the names and prefix them with 'id-'? Again, trivial.

`myList & each . someStuff . each . otherStuff . each . name %~ ("id-" <>)`
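Since van Laarhoven optics are just higher-order functions, the whole thing can even be sketched without the lens package. A minimal stdlib-only sketch, using Data.Map in place of HashMap and hypothetical helper names (real code would use makeLenses and the operators from Control.Lens):

```haskell
import Data.Functor.Const (Const (..))
import Data.Functor.Identity (Identity (..))
import qualified Data.Map as M

data Struct1 = Struct1 { _someStuff :: [Struct2] }
data Struct2 = Struct2 { _otherStuff :: M.Map String Struct3 }
data Struct3 = Struct3 { _name :: String }

-- A van Laarhoven optic is just a higher-order function,
-- so plain (.) composes them.
someStuff :: Functor f => ([Struct2] -> f [Struct2]) -> Struct1 -> f Struct1
someStuff f (Struct1 xs) = Struct1 <$> f xs

otherStuff :: Functor f
           => (M.Map String Struct3 -> f (M.Map String Struct3))
           -> Struct2 -> f Struct2
otherStuff f (Struct2 m) = Struct2 <$> f m

name :: Functor f => (String -> f String) -> Struct3 -> f Struct3
name f (Struct3 n) = Struct3 <$> f n

-- 'each' over any Traversable container (lists, Maps, ...)
each :: (Traversable t, Applicative f) => (a -> f a) -> t a -> f (t a)
each = traverse

-- toListOf plays the role of (^..): collect every focused value.
toListOf :: ((a -> Const [a] a) -> s -> Const [a] s) -> s -> [a]
toListOf l = getConst . l (\a -> Const [a])

-- over plays the role of (%~) used with (&): rewrite every focused value.
over :: ((a -> Identity a) -> s -> Identity s) -> (a -> a) -> s -> s
over l g = runIdentity . l (Identity . g)

names :: [Struct1] -> [String]
names = toListOf (each . someStuff . each . otherStuff . each . name)

prefixAll :: [Struct1] -> [Struct1]
prefixAll = over (each . someStuff . each . otherStuff . each . name) ("id-" ++)
```

Note that the optics compose with ordinary function composition; the lens library's versions are the same shape, just with nicer operators and generated field lenses.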

Your formulation is just field accessors: not composable, and ultimately no better than the record accessors Haskell already generates for you.




One thing that always strikes me about such libraries is that they are often relatively easy to use for common scenarios like that, and the mental model really isn't so complicated or abstract as to require category theory. When you use such a library, I don't think you are really doing deeply category-theoretic thinking; you're mostly seeing how the types fit together, and maybe thinking about which higher-order functions you want to combine and apply.

If you take Haskell and remove all the types (e.g. Scheme or JavaScript), you can have all the same abstractions, with basically the same syntax and operational behavior. But then of course the operational behavior is just applying/composing functions in the end... And because there are no higher-kinded types to deal with, category-theoretic language is less likely to sneak in.

I say this as someone who has used Haskell for years and studied category theory and PL theory as well. I like all of those things; I just think it's often not necessary to grasp the full mental model in order to use these tools.

I think that some code duplication is also not always the worst thing. Just because strings can be "combined" through concatenation and numbers can also be "combined" through addition does not necessarily mean you need one general notion/function capturing both. Taken to an extreme, you end up with some function that "sqoogles the byamyams", i.e. an abstraction so general that the only way people really understand or use it is by looking at its concrete instantiations.
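For what it's worth, Haskell does capture both with one general notion, Semigroup's `<>`, and the cost of that generality shows up immediately: a plain number is ambiguous under "combine" (add? multiply?), so you have to pick a newtype. A small sketch:

```haskell
import Data.Monoid (Product (..), Sum (..))

-- One general "combine": (<>) from Semigroup.
greeting :: String
greeting = "foo" <> "bar"   -- concatenation

-- Numbers alone don't say which combination you mean,
-- so the general notion forces a newtype choice:
added :: Int
added = getSum (Sum 2 <> Sum 3)

multiplied :: Int
multiplied = getProduct (Product 2 <> Product 3)
```

The newtype wrappers are exactly the kind of tax the generality imposes: the concrete instantiations are what make `<>` legible.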


> If you take Haskell and remove all the types (e.g. Scheme or JavaScript), you can have all the same abstractions, with basically the same syntax and operational behavior.

This isn't generally true: such a language would also have to be lazy by default, and that property matters more than having all the types, because certain abstractions that can be expressed "naturally" in Haskell are a byproduct of non-strict semantics. It's also the property that sometimes helps you avoid worst-case evaluation complexity, something eager languages have to live with at all times, no matter the abstractions.

There are a few examples here: https://augustss.blogspot.com/2011/05/more-points-for-lazy-e...
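A tiny illustration of the evaluation-order point (hypothetical function name): under non-strict semantics you can filter an infinite list and only pay for the prefix you actually demand.

```haskell
-- Only the demanded prefix of the infinite list [1..] is ever computed;
-- under eager evaluation this expression would not terminate.
firstSquaresOver :: Integer -> [Integer]
firstSquaresOver n = take 3 (filter (> n) (map (^ 2) [1 ..]))
```

For example, `firstSquaresOver 50` yields `[64, 81, 100]` after examining only the first ten squares.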




