
Is there any particular reason why PureScript was designed with a single number type (so no proper ints)? I know that's how JS sees things, but now that there's a whole proper type system it would seem like we should have proper numbers. I admit I haven't read the entire docs: is there a way to, e.g., have the type checker ensure that a number field such as a year in a date isn't assigned a non-integer?



It has been discussed before: https://github.com/purescript/purescript/issues/396

The reason I have been against the idea so far is that an integer type is easily accomplished using the FFI and user code. Here is one example (not quite what you're asking for, but hopefully instructive): https://github.com/darinmorrison/purescript-int64
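For illustration, here's a minimal sketch of that FFI approach in today's PureScript (the module and function names are made up, and this is not the API of purescript-int64): an opaque foreign type backed by a plain JS number, with the arithmetic kept behind functions that truncate to 32 bits on the JS side.

    -- Int32.purs -- a sketch; all names here are invented for illustration
    module Int32 (Int32, fromNumber, toNumber, add) where

    -- an opaque type whose runtime representation is just a JS number
    foreign import data Int32 :: Type

    -- truncate an arbitrary Number to a 32-bit value
    foreign import fromNumber :: Number -> Int32

    foreign import toNumber :: Int32 -> Number

    -- 32-bit addition
    foreign import add :: Int32 -> Int32 -> Int32

    -- The FFI counterpart (Int32.js) would contain something like:
    --   export const fromNumber = n => n | 0;
    --   export const toNumber = n => n;
    --   export const add = x => y => (x + y) | 0;

The point is that Int32 values can only be produced and combined through these functions, so the type checker keeps them separate from ordinary Numbers even though the generated code is all doubles.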


As long as it is efficient and compiler-supported (i.e. an int add ends up as the proper instruction, and an improper assignment ends up as a meaningful compiler error), I think the implementation doesn't matter. Going to natural numbers and such feels like a separate topic from just having efficient and type-safe (modulo overflow) integers.


Consider what it would take, writing in C, to store and compute integers in the "double" type, including range checks after every operation. Unless you go the asm.js route or otherwise ensure that your JIT special-cases integers, constructing range-limited integer types out of JavaScript doubles has a non-trivial cost.
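To make that cost concrete, here's a rough sketch (hypothetical names, PureScript-style rather than C) of a range-checked integer built on top of Number: every single operation pays for an extra comparison and branch, and the intermediate value is still an ordinary double.

    module CheckedInt where

    import Prelude
    import Data.Maybe (Maybe(..))

    -- a 32-bit-ranged integer stored in an ordinary double
    newtype CheckedInt = CheckedInt Number

    inRange :: Number -> Boolean
    inRange n = n >= (-2147483648.0) && n <= 2147483647.0

    -- every arithmetic operation needs its own check afterwards
    addChecked :: CheckedInt -> CheckedInt -> Maybe CheckedInt
    addChecked (CheckedInt x) (CheckedInt y) =
      let r = x + y
      in if inRange r then Just (CheckedInt r) else Nothing

    mulChecked :: CheckedInt -> CheckedInt -> Maybe CheckedInt
    mulChecked (CheckedInt x) (CheckedInt y) =
      let r = x * y
      in if inRange r then Just (CheckedInt r) else Nothing

Unless the JIT manages to special-case this pattern, none of those checks come for free at runtime.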

Personally, it surprises me that with all the years of extensions to ECMAScript, nobody has added 1) sized integer types like word32, word64, and so on, and 2) transparent bignums, similar to int/long in Python.


I'm not sure I understand the problem. I meant that in PureScript there would be distinct number types, and the PureScript compiler would forbid me from assigning a double value to an int field. The generated JS would be all regular JS "numbers" (i.e. floating point throughout).

I mean, there are several compilers from languages with strong type systems, such as F#->JS with FunScript, and in those the source language usually has integers...
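Something like the following sketch, assuming the language had distinct Int and Number types (the record and field names here are invented): the generated code would still be plain JS numbers, but the type checker rejects a fractional literal where an integer is expected.

    module Example where

    type Date = { year :: Int, dayFraction :: Number }

    ok :: Date
    ok = { year: 2014, dayFraction: 0.25 }

    -- rejected at compile time: 2014.5 is a Number, not an Int
    -- bad :: Date
    -- bad = { year: 2014.5, dayFraction: 0.25 }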


What happens when you take two "integer" fields and add or multiply them? The compiler would need to check that the result still fits in an integer.


No compiler in existence makes these kinds of checks on anything except maybe literal values in special cases. What you propose would entail evaluating the program's runtime behavior at compile time. Ensuring anything that gets put in an integer field is an integer is easy assuming you have type annotations on everything.

For the cases you mention, the checks would simply be: integer * integer = integer, integer + integer = integer. Problems only arise when real/floating-point numbers are introduced, and in deciding how to handle integer division.
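A sketch of those rules (assuming an Int type with the usual arithmetic instances; the function names are just for illustration): addition and multiplication are closed over the integers, and only division forces a choice between truncating and converting.

    module ClosedArithmetic where

    import Prelude
    import Data.Int (toNumber)

    -- integer + integer and integer * integer both stay integers
    sumAndProduct :: Int -> Int -> { sum :: Int, product :: Int }
    sumAndProduct a b = { sum: a + b, product: a * b }

    -- one option for division: truncating integer division stays in Int...
    halve :: Int -> Int
    halve n = n `div` 2

    -- ...while exact division requires an explicit step up to floating point
    ratio :: Int -> Int -> Number
    ratio a b = toNumber a / toNumber b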


>No compiler in existence makes these kinds of checks

If your claim is that no compiler in existence can make these kinds of checks at compile time, that's not true. Any dependently-typed language can do this (Idris, Coq, Agda, ATS). In fact, even a language with refinement types can do this, like Liquid Haskell.

And a number of languages do these checks at runtime: Ada and Nimrod are two that come to mind.


Assuming you mean two 32-bit integers, the result is a new 32-bit integer that may have overflowed. The compiler doesn't care in any language I know of.
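For example (a sketch, assuming the wraparound semantics you get from `(x + y) | 0` in JavaScript):

    module Overflow where

    import Prelude

    -- 2147483647 is the largest signed 32-bit integer; under two's-complement
    -- wraparound the sum comes out as -2147483648, with no compile-time error
    wrapped :: Int
    wrapped = 2147483647 + 1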



