> For example, let's say a method should only work with integers larger than zero

I dislike this example. The numeric systems of every programming language I've ever used have been (more or less) terrible, precisely because there are extremely common and simple arithmetic types, just like this one, that they're terrible at representing. Half of the "modern" languages I've seen just provide the thinnest possible wrapper around the hardware registers ("Int32"!).

(What if I need to accept an integer in the range 5 < x < 10 instead? Am I supposed to define a new class for every range?)

Instead of saying we need a system of user-definable classes so every user can fix the numerics on their own, for each program they write, I'd say we should fix our numeric systems to support these common cases, and then re-evaluate whether we really need classes.

Are there non-numeric analogues to this type of value restriction? Maybe. It doesn't seem like a very common one, but it is an interesting one. Perhaps what we really want is "define type B = type A + restriction { invariant X }". I can't think of any examples offhand in my own work, but that could be because I haven't had that language feature so I wasn't looking for places to apply it.
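
For concreteness, here is one way the "type A + restriction { invariant X }" idea might look today, sketched in Haskell with a newtype and a smart constructor (the names Between5And10 and mkBetween5And10 are invented for the example): the check lives in the only function that can produce the type, so any value of that type downstream already satisfies 5 < x < 10.

    -- Sketch: a restricted type as "Int + invariant (5 < x < 10)".
    -- Hiding the real constructor and exporting only the smart
    -- constructor makes the invariant hold for every value of the type.
    module Between5And10 (Between5And10, mkBetween5And10, getValue) where

    newtype Between5And10 = Between5And10 { getValue :: Int }

    mkBetween5And10 :: Int -> Maybe Between5And10
    mkBetween5And10 x
      | x > 5 && x < 10 = Just (Between5And10 x)
      | otherwise       = Nothing

Of course this is exactly the "new class for every range" problem again; it only sketches the shape a built-in range feature would have to replace.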




I guess it comes down to an unresolved duality.

One is the idea of types as describing the shape of data, in the best cases with some semantics tied to the data (how the bits are to be interpreted).

Then there is the other view, the Curry-Howard one, where types describe proofs and invariants of the program itself, and where interesting properties of program compositions can be ensured.
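
(Roughly, in Haskell, with names invented purely for illustration: the first view is a type that just fixes the shape of a value; the second is a type that carries an invariant the compiler can check, here the length of a list.)

    {-# LANGUAGE DataKinds, GADTs, KindSignatures #-}

    -- First view: the type describes the shape of the data.
    data Point = Point { px :: Double, py :: Double }

    -- Second view: the type states an invariant of the program itself.
    -- The length is tracked in the type, so a head function that demands
    -- a non-empty vector needs no runtime check.
    data Nat = Z | S Nat

    data Vec (n :: Nat) a where
      VNil  :: Vec 'Z a
      VCons :: a -> Vec n a -> Vec ('S n) a

    safeHead :: Vec ('S n) a -> a
    safeHead (VCons a _) = a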

It seems much time is wasted when people holding one perspective debate with people holding the other.

Perhaps we should have separate words for these concepts.



