To do this the way Dan expects, you either need two parallel type systems (like C++'s const - a bad idea, because you ultimately end up writing const and non-const versions of many methods) or you only allow expressions of immutable types to be bound as vals.
Having immutable types as a first-class language concept is a good idea, and much simpler to reason about: it states an invariant that the compiler can check and that most programmers understand as a first-level concept.
The author doesn't seem to be complaining about interior mutability, AFAICT.
Scala makes the case the author is complaining about clearer than Kotlin by using the 'def' keyword (the same one used for functions/methods) for getters rather than 'val' (which can only be used for values). Matching the style of the article, it's:
class Person(val birthDay: DateTime) {
  def age: Int = yearsBetween(birthDay, DateTime.now)
}
You can't use 'val' here and so excepting interior mutability, Scala 'val's don't change (and importantly, the reference you get by naming one doesn't ever change).
Rust seems to handle that reasonably well: interior mutability[0] aside, mutating an object requires having an &mut to it, which you can only get through a mut binding (or a pre-existing &mut).
Swift also classifies methods between regular (accessible on both var and let binding) and `mutating` (only accessible on let bindings), though I think that also works for value types (structs).
> Swift also classifies methods between regular (accessible on both var and let binding) and `mutating` (only accessible on let bindings), though I think that also works for value types (structs).
Just to clarify the terminology, the `mutating` keyword in Swift is only used with value types (`struct` and `enum`) and is only accessible on `var` bindings (`let` bindings are readonly).
IMO allowing interior mutability by default on `let`-bound reference (`class`) types was a mistake in the language design and the same syntax should've been required there, but I assume Apple wanted to make things clearer for programmers who don't have a C/C++/similar background and so don't yet understand the distinction between [stack-allocated, pass-by-copy] values and references [to heap-allocated, reference-counted objects].
In C++ terms:
#include <memory>
using std::shared_ptr;

struct O { int f; };

// A Swift class reference behaves like a shared_ptr that implicitly converts
// to its pointee.
template<class T> struct SwiftRef : shared_ptr<T> {
    template<class U> SwiftRef(U u) : shared_ptr<T>(new T{u}) {}
    operator T() const noexcept { return **this; }
};

O struct_var_binding{4};
const O struct_let_binding{4};
SwiftRef<O> class_var_binding{4};
const SwiftRef<O> class_let_binding{4};  // const pointer, but mutable pointee
// What it should be, IMO: const SwiftRef<const O> class_let_binding{4};
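The key behavior - that a const reference binding still lets you mutate the object it points to - can be checked with a minimal self-contained sketch (the function name here is mine, for illustration only):

```cpp
#include <memory>

struct O { int f; };

// Mutating through a const shared_ptr compiles, because const applies to
// the pointer itself, not to the pointee - analogous to mutating a field
// of a `let`-bound class instance in Swift.
int mutate_through_let_class() {
    const std::shared_ptr<O> let_class = std::make_shared<O>(O{4});
    let_class->f = 5;       // allowed: the pointee is non-const
    return let_class->f;
}
```

By contrast, `const O struct_let_binding{4}; struct_let_binding.f = 5;` is a compile error, which is the value/reference asymmetry the comment above is describing.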
> IMO allowing interior mutability by default on `let`-bound reference (`class`) types was a mistake in the language design and the same syntax should've been required there, but I assume Apple wanted to make things clearer for programmers who don't have a C/C++/similar background and so don't yet understand the distinction between [stack-allocated, pass-by-copy] values and references [to heap-allocated, reference-counted objects].
They also likely didn't want to make the work of bridging/using Obj-C types (which I believe are all reference types) even harder.
Could you explain why I would need to write two versions of methods? I always liked C++ const-ness; the only drawback I see is that const-ness should be the default, and something like a `sideeffect` modifier should be required for non-const functions.
In C++, there need to be e.g. two overloaded versions of the index operator on a vector: one taking a const vector reference returning a const reference to the value, and one taking a non-const vector reference returning a non-const reference to the value.
With only the first one you couldn't modify values of a vector using nice [] syntax. With only the second one you couldn't use the same nice [] syntax on const vectors. So you need both. The implementation can be identical, but the method signature is different. You end up with such duplicate functions all the time for accessor methods.
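A stripped-down container (hypothetical name `Buf`, standing in for vector) makes the duplication concrete - the two `operator[]` overloads have identical bodies but different signatures:

```cpp
#include <cstddef>

// Minimal vector-like container showing why C++ needs two overloads of
// operator[]: one for read-only access through a const object, one that
// returns a mutable reference so `buf[i] = x` compiles.
struct Buf {
    int data[4] = {0, 1, 2, 3};

    // const overload: callable on a const Buf, returns a read-only reference
    const int& operator[](std::size_t i) const { return data[i]; }

    // non-const overload: same body, needed for assignment through []
    int& operator[](std::size_t i) { return data[i]; }
};

int demo() {
    Buf b;
    b[0] = 42;              // resolves to the non-const overload
    const Buf& cb = b;
    return cb[0];           // resolves to the const overload
}
```

Drop the const overload and `cb[0]` fails to compile; drop the non-const one and `b[0] = 42` fails. That pair, repeated across every accessor, is the duplication being described.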