Such as? The basic property of overloading is that it's open. Any closed set of overloads can be converted to a single function that does the same dispatch logic with ifs and type traits (though the result may not be very readable).
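For instance, here's a minimal sketch of that conversion in Haskell (Arg and describe are names invented for illustration): a closed set of "overloads" collapses into one function that branches on the argument's shape.

-- the closed set of cases, made explicit as a sum type
data Arg = AnInt Int | AString String

-- stands in for an overloaded describe(int) / describe(string) pair
describe :: Arg -> String
describe (AnInt n) = "int: " ++ show n
describe (AString s) = "string: " ++ s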
The point is that comptime isn't dependent types at all. If your types can't depend on runtime values, they aren't dependent types. It's something more like kind polymorphism in GHC (except more dynamically typed), something which GHC explicitly calls out as not dependent types. (Also it's 12 years old [1]).
They are the same thing, though. Conceptually there's a partial evaluation pass whose goal is to eliminate all the comptimes by lowering them to regular runtime values. The apparently different "features" just arise from its operation on different kinds of program constructs. To eliminate an expression, it evaluates the expression and replaces it with its value. To eliminate a loop, it unrolls it. To eliminate a call to a function with comptime arguments, it generates a specialized function for those arguments and replaces it with a call to the specialized function.
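To make that last step concrete, here's a hand-worked sketch in Haskell (power/power3 are illustrative names, not any compiler's actual output):

power :: Int -> Int -> Int
power 0 _ = 1
power n x = x * power (n - 1) x

-- partially evaluating power at the "comptime" argument n = 3 unrolls the
-- recursion and leaves an ordinary runtime function:
power3 :: Int -> Int
power3 x = x * (x * (x * 1))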
One example is a function call that doesn't compile, but would compile if you inlined the function body. Compilation is blocked only because the function signature isn't expressive enough.
The basic problem is that when some piece of state changes, all the UI that depends on that state needs to be updated. The simple solution presented in the link is to write update functions that do the correct update for everything, but as the dependency graph becomes large and keeps changing during development, these become very hard to maintain or even check for correctness. The amount of code also grows with the number of possible updates.
Reactive view libraries basically generate the updates for you (either from VDOM diffing or from observables/dependency tracking). This removes the entire problem of incorrect update functions, and the code size for updates is now constant (just the size of the library).
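As a rough sketch of the observable flavor, in Haskell with IORefs (Signal, subscribe, and writeSignal are invented names, not any real library's API):

import Data.IORef

-- a value plus the update functions that depend on it
data Signal a = Signal (IORef a) (IORef [a -> IO ()])

newSignal :: a -> IO (Signal a)
newSignal x = Signal <$> newIORef x <*> newIORef []

-- register a dependent update once, instead of calling it at every write site
subscribe :: Signal a -> (a -> IO ()) -> IO ()
subscribe (Signal _ subs) f = modifyIORef subs (f :)

-- writing a new value notifies every subscriber, so the dependent UI
-- updates itself rather than being updated by hand
writeSignal :: Signal a -> a -> IO ()
writeSignal (Signal ref subs) x = do
  writeIORef ref x
  readIORef subs >>= mapM_ ($ x)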
But what if your dependency graph never becomes large (HN, Craigslist, ...)?
I believe a lot of web applications can go without any reactive framework, as using one is a slippery slope. You start with React and 80% of your code is replacing browser features. Imperative may not be as elegant, but it's simpler when you don't need that much extra interactivity.
Then you don't need it. The same goes if you can do everything (or almost everything) with page reloads, or if you don't have any reactivity at all. But the problem is still real, even if people use frameworks when they don't really have to.
Reactive view libraries exist to hide those details. I think the OP is proposing that the benefit of reactive views/state isn't worth the cost and complexity.
It is absolutely worth the cost and complexity. Building a web application on some home-grown vanilla JS system will end up being a horrible engineering decision most of the time.
There have been zero times in my career where I thought "hmm, maybe we shouldn't have built this thing in React and let's just go back to page scripts." If you're building landing pages and websites, then okay. But that's not most of what we're all hired to build these days.
Whether the cost is worth the complexity is far too dependent on context to say it always is.
For a team that is experienced in React, or a project that depends heavily on client-side rendering, React (or something similar) makes sense.
For a team that is more backend-focused, or a project that is CRUD-heavy and mostly renders state that persists on the server, it could very well make sense to lean on server-rendered HTML with small bits of JS for interactivity.
We as an industry tilted way too far toward client-side rendering. If you're building Facebook or Figma or Discord, sure, maybe CSR is a must. Most websites don't need much CSR, though, and if you're only using it for small bits of interactivity, you may be better off foregoing the complexity of a framework and taking responsibility for the full render pipeline.
Do you mean subscribing to events/callbacks, manually managing object lifecycles, manually inserting list elements, keeping it all in sync with the state, etc., etc.? Because that was all friggen horrible. Maybe new approaches could make it less horrible, but there is no way I'd go back to what it was like before React. If anything, I want everything to be more reactive, more like immediate-mode rendering.
They still nail "state" to element trees, which creates unbenchmarkable but real update costs. Svelte does better than react, but only within the same paradigm.
Can you describe what you mean by that a bit more? As I understand it, with the new signals-based system in Svelte, updating data directly updates the DOM.
Anecdotally, this is one of those things that's trivially true to some people, but really hard for other people to internalize. I think it's why the "container" framing can lead people astray: if you haven't internalized the idea of functions as being indexed by their argument, it's a really mind-twisting thing to try to make that leap.
One of the fun things about Clojure that reinforces this "trivially true" perspective is that maps and sets are functions:
;; "maps" the keys to the values
(map {1 "a" 2 "b"} (take 5 (cycle 1 2))) ;;=> '("a" "b" "a" "b" "a")
;; acts as a predicate that tests for membership
(filter #{"a" "b" "c"} ["a" "b" "c" "d" "e" "f"]) ;;=> '("a" "b" "c")
Once you get used to this idiom you naturally start thinking of other functions (or applicative functors) the same way. The syntax sugar makes for some very concise and expressive code too.
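For comparison, a rough Haskell rendering of the same idiom, turning an association list into a partial function (asFn is a name I'm making up):

import Data.Maybe (mapMaybe)

asFn :: Eq k => [(k, v)] -> k -> Maybe v
asFn = flip lookup

-- mirrors the Clojure map example above
pairs :: [String]
pairs = mapMaybe (asFn [(1, "a"), (2, "b")]) [1, 2, 1, 2, 1]
-- pairs == ["a", "b", "a", "b", "a"]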
I think that understanding the (moral) equivalence is useful in both directions. In particular, I think helping people understand the "function-as-container" analogy is a useful way for people to understand pure functions: another thing that's conceptually simple, but that a lot of people struggle to really wrap their minds around.
I've recently started writing a series of blog posts (https://rebeccaskinner.net/posts/2024-10-18-dictionaries-are...) trying to explain the idea and my approach has been to explain the idea using comprehensions. I haven't had a lot of people review the post yet, and I still have at least one if not two more follow-ups before it's done, so I'm not yet sure how well the idea will land.
Nice introduction. Still not entirely sold that dictionaries are pure functions, though.
Will you be covering common dictionary operations like adding/removing elements and iterating over the dictionary keys?
I have some ideas on how one might frame it in a pure-function setting, but they all seem quite contorted, in a similar way to your incrementDict, i.e. you'd never actually do that, so I'm curious if there are better ways. Then maybe you'll sell me on the premise.
I'm really focusing less on the idea that Dict the data type with its associated methods is like a function, and more on the idea that a dictionary in the general sense is a mapping of input values to output values, and that you can think of functions that way.
That said, there are some pretty reasonable analogies to be made between common dictionary operations and functions.
For example, adding and removing items can be done with function composition so long as you are okay with partial lookups. Here's a really short example I put together:
{-# LANGUAGE RankNTypes #-}

module Example where

import Control.Applicative ((<|>))

-- a dictionary is just a (partial) function from keys to values
type Dict a b = Eq a => a -> Maybe b

emptyDict :: Dict a b
emptyDict = const Nothing

-- a one-entry dictionary: answers for its key, fails for everything else
singleton :: a -> b -> Dict a b
singleton k v target
  | k == target = Just v
  | otherwise = Nothing

-- left-biased union: consult the first dictionary, then fall back to the second
unionDict :: Dict a b -> Dict a b -> Dict a b
unionDict dict1 dict2 k = dict1 k <|> dict2 k

insertDict :: a -> b -> Dict a b -> Dict a b
insertDict k v dict = singleton k v `unionDict` dict

-- "removing" a key shadows it with a lookup failure; other keys pass through
removeDict :: a -> Dict a b -> Dict a b
removeDict k dict target
  | k == target = Nothing
  | otherwise = dict target
This particular representation of dictionaries isn't necessarily something you'd really want to do, but the general approach can be quite useful when you start working with something like GADTs and you end up with things like:
data Smaller a where
  SmallerInt :: Smaller Int
  SmallerBool :: Smaller Bool

data Larger a where
  LargerInt :: Larger Int
  LargerBool :: Larger Bool
  LargerString :: Larger String

someLarger :: Larger x -> x
someLarger l =
  case l of
    LargerInt -> 5
    LargerBool -> True
    LargerString -> "foo"
embedLarger ::
  (forall x. Larger x -> Smaller x) ->
  (forall smallerI. Smaller smallerI -> r) ->
  Larger largerI ->
  r
embedLarger mapping fromSmaller larger = fromSmaller (mapping larger)
(I'm actually co-authoring a talk for zurihac this year on this pattern, so I have quite a bit more to say on it, but probably not ideal to cram all of that into this comment).
> and more on the idea that a dictionary in the general sense is a mapping of input values to output values, and you can think of functions that way.
So what's the difference between a map and a dictionary then?
> Here's a really short example I put together
Much appreciated. I don't really know Haskell (nor any other functional language), but I'm pretty sure I understood it.
> This particular representation of dictionaries isn't necessarily something you'd really want to do
Yeah that's pretty much what I had in mind, and yes it's possible but it feels forced. For one you're not actually removing an element, you just make it impossible to retrieve. A distinction that might seem moot until you try to use it, depending on the compiler magic available.
> I'm actually co-authoring a talk for zurihac this year on this pattern
Sounds interesting, will check it out when it's published.
> So what's the difference between a map and a dictionary then?
You're asking good questions and catching me being imprecise with my language. Let me try to explain what I'm thinking about more precisely without (hopefully) getting too formal.
When I say "a function is a mapping of values" I'm really trying to convey the idea of mathematical functions in the "value goes in, value comes out" sense. In a pure function, the same input always returns the same output. If you have a finite number of inputs, you could simply replace your function with a lookup table and it would behave the same way.
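A tiny sketch of that replacement in Haskell (tabulate is a hypothetical helper, not a standard one):

import qualified Data.Map as Map

-- record a pure function's answer for every input in a finite domain
tabulate :: Ord a => [a] -> (a -> b) -> Map.Map a b
tabulate domain f = Map.fromList [ (x, f x) | x <- domain ]

squares :: Map.Map Int Int
squares = tabulate [0 .. 9] (\x -> x * x)
-- Map.lookup 3 squares == Just 9, the same answer the function gives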
When I talk about dictionaries, I'm speaking a little loosely; sometimes I'm talking about particular values (or instances of a Python dict), and other times I'm being more abstract. In any case though, I'm generally trying to get across the idea that you have a similar relationship, where for any key (input) you get a particular output.
(aside: Literally right now as I'm typing this comment, I also realize I've been implicitly assuming that I'm talking about an _immutable_ value, and I've been remiss in not mentioning that. I just want to say that I really appreciate this thread because, if nothing else, I'm going to edit my blog post to make that more clear.)
The main point is that dictionaries are made up of discrete keys and have, in Python at least, a finite number of keys. Neither of those constraints necessarily applies to functions, so we end up in an "all dictionaries are functions, but not all functions are dictionaries" situation.
> Yeah that's pretty much what I had in mind, and yes it's possible but it feels forced. For one you're not actually removing an element, you just make it impossible to retrieve. A distinction that might seem moot until you try to use it, depending on the compiler magic available.
This is a great example of the kind of thinking I'm trying to address in the article. You're completely right in a very mechanical "this is what memory is doing in the computer" sort of sense, but from the standpoint of reasoning about the problem space deleting an element and being unable to access the element are the same thing.
Of course in the real world we can't completely handwave away how much memory our program uses, or the fact that a function encoding of a dictionary turns a constant-time lookup into a linear-time lookup. Those are real concerns that you have to deal with for non-trivial applications, even in a pure functional language.
The benefit you get, and I apologize because this is hard to explain, let alone prove, is that you can often end up with a much _better_ solution to problems when you start by handwaving away those details. It opens up the solution space to you. Transformations to your architecture and the way you think about your program can be applied regardless of the specific representation, and it's a really powerful way to think about programming in general.
Thanks for the detailed responses, highly appreciated.
I taught myself programming as a kid using QBasic, and quickly moved on to Turbo Pascal and assembly, so clearly my programming career was doomed from the start[1].
For one, I do like to keep in mind how the code will actually be executed. The times I've not done that, it has usually come back to bite me. But that perhaps hampers me a bit when reading very abstract work.
That said, I like going outside my comfortable box, as I often find useful things even if they might not be directly applicable to what I normally do. Like you say, changing the point of view can often help a lot, and it's something that can be done in a general way.
Anyway, looking forward to the rest of the article series and the talk.
Interesting. I liked how "Dictionaries are Pure Functions" set up currying as JSON nested dictionaries.
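If I understand the setup correctly, the correspondence looks roughly like this in Haskell (add and addTable are illustrative names; Data.Map stands in for JSON objects):

import qualified Data.Map as Map

-- a curried function of two arguments...
add :: Int -> Int -> Int
add x y = x + y

-- ...tabulated over a finite domain as a dictionary of dictionaries
addTable :: Map.Map Int (Map.Map Int Int)
addTable = Map.fromList
  [ (x, Map.fromList [ (y, add x y) | y <- [0 .. 3] ]) | x <- [0 .. 3] ]

-- two-stage lookup mirrors two-stage application:
-- (Map.lookup 1 addTable >>= Map.lookup 2) == Just (add 1 2)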
Curiously, I've a backburnered esolang idea of gathering up the rich variety of dict-associated tooling one never gets to have all in one place, and then making everything dict-like. Permitting, say, XPath sets across function compositions.
An interesting property is that when you have left and right application, which part of f x is the function and which is the argument becomes sort of ambiguous. You can think of f as something that combines with a B on its right to produce an A: (A/B)B = A. But you can also think of x as something that combines with an A/B on its left to produce an A: (A/B)((A/B)\A) = A.
This is famously used in interpreting the status of "Socrates is mortal" ("Socrates" is the argument) and "Every man is mortal" ("is mortal" is the argument).
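In programming terms this is just type raising; a small Haskell sketch (raise and isMortal are toy names for illustration):

-- an argument becomes a function over its own consumers
raise :: a -> ((a -> b) -> b)
raise x f = f x

isMortal :: String -> Bool
isMortal _ = True

-- "Socrates is mortal": the predicate consumes the subject
claim1 :: Bool
claim1 = isMortal "Socrates"

-- the same fact with the roles flipped: the raised subject consumes the predicate
claim2 :: Bool
claim2 = raise "Socrates" isMortal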
Our knowledge of linear algebra tells us that tr is a linear map. Hence, dX -> 2 tr(X^T dX) is the linear mapping corresponding to the Jacobian of tr(X^T X). With a little more work we could figure out how to write it as a matrix.
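Spelling out the step, using linearity of the trace and tr(A^T) = tr(A):

d tr(X^T X) = tr(dX^T X) + tr(X^T dX) = tr((X^T dX)^T) + tr(X^T dX) = 2 tr(X^T dX)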
“Thanks to a fabulous video by 3Blue1Brown [1], I am going to present some of the basic concepts behind matrix exponentials and why they are useful in robotics when we are writing down the kinematics and dynamics of a robot.”