I followed a similar trajectory. Types were the bane of my early career. Hideous, extraneous.
But really, they're the light at the end of the tunnel once you've worked your way through the dynamic / weak typing minefield. It took a lot of Python, JavaScript, and Ruby for me to get there, but now I'm way more comfortable on the other side.
The correct type system is actually way more expressive than not having strong static types. Sum types let you combine multiple return types elegantly and not be sloppy. Option types remind you to check for an absent value.
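A minimal Rust sketch of those two points (the function names are hypothetical): a sum type makes failure part of the signature, and an option type makes absence explicit, so the compiler rejects code that forgets either case.

```rust
// A sum type: parsing either succeeds with a number or fails with a message.
// Callers must handle both variants; forgetting one is a compile error.
fn parse_age(input: &str) -> Result<u32, String> {
    input.trim().parse::<u32>().map_err(|e| format!("bad age: {e}"))
}

// An option type: absence is visible in the signature, not a surprise nil.
fn first_word(s: &str) -> Option<&str> {
    s.split_whitespace().next()
}

fn main() {
    match parse_age("42") {
        Ok(age) => println!("age is {age}"),
        Err(msg) => println!("error: {msg}"),
    }
    // The caller is forced to consider the empty case.
    assert_eq!(first_word("hello world"), Some("hello"));
    assert_eq!(first_word("   "), None);
}
```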
Static types let you refactor and jump to definition quickly and with confidence.
Your interfaces become concrete and don't erode with the shifting sands of change. As an added bonus, you don't need to write precondition type checks in your functions.
Types are organizational. Records, transactional details, context. You can bundle things sensibly rather than put them in a mysterious grab bag untyped dictionary or map.
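To illustrate that organizational point, here is a small Rust sketch (the `Order` fields are invented for the example) contrasting a typed record with a stringly-typed grab-bag map:

```rust
use std::collections::HashMap;

// The untyped grab bag: everything is a string, keys are unchecked.
fn untyped_order() -> HashMap<String, String> {
    let mut m = HashMap::new();
    m.insert("id".to_string(), "1001".to_string());
    m.insert("amout".to_string(), "19.99".to_string()); // typo compiles fine
    m
}

// The typed record: fields are named, typed, and checked by the compiler.
#[derive(Debug)]
struct Order {
    id: u64,
    amount_cents: u64, // units live in the name and the type
    customer: String,
}

fn main() {
    let o = Order { id: 1001, amount_cents: 1999, customer: "Ada".into() };
    // `o.amout` would be a compile error; the map's typo is a silent runtime bug:
    println!("{:?}", untyped_order().get("amount")); // None, thanks to the typo
    println!("order {} for {}: {} cents", o.id, o.customer, o.amount_cents);
}
```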
Types help literate programming. You'll find yourself writing fewer comments as the types naturally help document the code. They're way more concrete than comments, too.
With types, bad code often won't compile. Catching bugs early saves so much time.
Types are powerful. They're worth the 3% of extra cognitive load and pay dividends over the long haul. Before long you'll be writing types with minimal effort.
Same here! I started with C++ and Java, learned to hate excessive typing, went through a long period of dynamic typing, and now I'm at the point you and the author are.
I still code a lot of Common Lisp on the side, but my Lisp code now looks entirely different from how it looked just 3 years ago. The language standard does support optional type declarations, and there's an implementation (SBCL) that uses them both to optimize code and to provide some static type checking at compile time (with type inference). So my Lisp code now exploits this, and is littered with type declarations.
However, the CL type system is very much lacking compared to Rust or Haskell. I'm hoping one day someone will make a statically, strongly typed Lisp that still doesn't sacrifice its flexibility and expressive power. I'd jump to that in an instant.
My experience with strong types is limited: I'd done the thing of learning some C, doing professional stuff in Ruby for several years, and then discovering the ridiculous power strong types can have while doing some professional stuff in Go.
Typed Racket [1] was really a revelation to me in that regard. I'd be curious how developers with more strongly-typed language experience feel about it.
> However, the CL type system is very much lacking compared to Rust or Haskell. I'm hoping one day someone will make a statically, strongly typed Lisp that still doesn't sacrifice its flexibility and expressive power. I'd jump to that in an instant.
This part inspired me to look up the wiki page "Haskell Lisp" [1], because I somehow remembered that some people were trying to make a Haskell that could be written in Lisp. But this page reveals even more interesting efforts:
> Shentong - The Shen programming language is a Lisp that offers pattern matching, lambda calculus consistency, macros, optional lazy evaluation, static type checking, one of the most powerful systems for typing in functional programming, portability over many languages, an integrated fully functional Prolog, and an inbuilt compiler-compiler. Shentong is an implementation of Shen written in Haskell.
> Liskell - From the ILC 2007 paper: "Liskell uses an extremely minimalistic parse tree and shifts syntactic classification of parse tree parts to a later compiler stage to give parse tree transformers the opportunity to rewrite the parse trees being compiled. These transformers can be user supplied and loaded dynamically into the compiler to extend the language." Has not received attention for a while, though the author has stated that he continues to think about it and has future plans for it.
But this page does not list everything: there is also Hackett [2], which introduces itself with "Hackett is an attempt to implement a Haskell-like language with support for Racket’s macro system, built using the techniques described in the paper Type Systems as Macros. It is currently extremely work-in-progress." - though it seems it hasn't changed in two years.
And finally there is Axel [3] - which introduces itself with "Haskell's semantics, plus Lisp's macros.
Meet Axel: a purely functional, extensible, and powerful programming language."
Disclaimer: I never learned any Lisp; I went from C/Java/JavaScript/Bash straight to Haskell and will be a Haskell beginner for life. Though I love the language, and the fact that I will keep learning it, and being surprised by it, for the rest of my life.
> But really, they're the light at the end of the tunnel once you've worked your way through the dynamic / weak typing minefield. It took a lot of Python, JavaScript, and Ruby for me to get there, but now I'm way more comfortable on the other side.
To me that was not the issue. It was, rather, discovering languages with powerful and expressive type systems.
My first job was in Java; most of my career afterwards was in Python. I've been type-curious for a while because of Haskell and OCaml, and am very fond of Rust; I'd happily take a job in any of those.
Types in Java are still, today, largely verbose, hideous, and extraneous. The cost-to-benefit ratio is terrible: you pay a very high cost for limited benefit, and the cost generally increases faster than the benefit. You can leverage types heavily, but that creates a codebase which is ridiculously verbose, inefficient (because every type is boxed), opaque, and which simply doesn't look like any other Java codebase, so it will be very much disliked by most of the people you're working with. And the benefits will still, at the end of the day, be rather limited.
Me too, but I learned in Turbo Pascal and early Java, which had some real limitations in their type systems. Imagine: strings of different lengths being incompatible types, arrays of different lengths being incompatible types, no generics (so no standard collections), no serialization, tons of manual typecasting, and having to write separate methods for every possible set of parameters.
Out of nostalgia, after getting so frustrated with dynamically typed code, I once tried to go back to some of that old code and make it use JSON instead of proprietary formats. That was a nightmare.
In the dynamic languages it would be utterly trivial. Just call json_encode($whatever), or $whatever = json_decode($some_string).
Modern languages with modern type systems, inference, generics, etc. that make things like that possible and relatively clean completely change the picture.
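As a sketch of what "relatively clean" can look like with generics and traits, here is a toy encoder in Rust using only the standard library (the `ToJson` trait and its instances are invented for illustration; in real code one would reach for a library like serde, where a single derive attribute generates the serialization):

```rust
// A toy encoder trait: one method, implemented once per type.
trait ToJson {
    fn to_json(&self) -> String;
}

impl ToJson for u32 {
    fn to_json(&self) -> String { self.to_string() }
}

impl ToJson for String {
    fn to_json(&self) -> String { format!("\"{}\"", self) }
}

// Generics: encoding a Vec is written once and works for *any* encodable
// element type - no casts, no runtime type checks, and it nests for free.
impl<T: ToJson> ToJson for Vec<T> {
    fn to_json(&self) -> String {
        let items: Vec<String> = self.iter().map(|x| x.to_json()).collect();
        format!("[{}]", items.join(","))
    }
}

fn main() {
    let ns: Vec<u32> = vec![1, 2, 3];
    assert_eq!(ns.to_json(), "[1,2,3]");
    let nested = vec![vec![1u32], vec![2, 3]];
    assert_eq!(nested.to_json(), "[[1],[2,3]]");
}
```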
Another thing that bugs me about dynamic languages is that, of course, you have to manually check everything all the time, because the compiler can't. We used to complain about the bloat of having to write all those type names and casts, but dynamic code, if it has good checks, can actually be more bloated in addition to being less expressive.
Nothing good comes easy. I'd spend dozens of hours staring at 26 lines of R, just trying different ideas to shorten, optimize, or clarify them, and that wasn't even something I needed to sell, or that someone else would depend on for business or personal use.
But I can relate to the pressure to deliver quick results. I found myself burnt out when working on a forecast model around three years ago. The constant "how's it goin'?" tore my attention away from the work, and I'm still convinced I could have delivered a better result.
So, in a way, I agree. In another, I understand the other side of the issue. There are so many less time-intensive tasks going on around engineering that there's often little awareness that something like refactoring a class for better efficiency pays off in smaller but compounding ways over the long term, while most of the time cost and perceived opportunity cost is immediate and short-term. It's still worth it if you really do the math on the long-term benefit.
Sum types even work when the alternatives have the same type. (I.e., in Haskell `Either String String` works just as well as `Either String Int`; the types don't have to be distinct.)
Also, `Maybe (Maybe a)` works correctly, unlike possibly-null values in dynamically typed languages. It's a surprisingly significant issue given how trivial it sounds.
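The same two points can be sketched in Rust, where `Option<Option<T>>` plays the role of `Maybe (Maybe a)` and `Result<String, String>` that of `Either String String` (the middle-name example is invented for illustration):

```rust
use std::collections::HashMap;

fn main() {
    // Values are themselves optional: a person may have no middle name.
    let mut middle_names: HashMap<&str, Option<&str>> = HashMap::new();
    middle_names.insert("ada", Some("King"));
    middle_names.insert("alan", None);

    // `get` wraps the stored value in another Option, so the two kinds of
    // "missing" stay distinct - unlike a dynamic language where both
    // collapse into one nil/null.
    assert_eq!(middle_names.get("ada"), Some(&Some("King"))); // key present, value present
    assert_eq!(middle_names.get("alan"), Some(&None));        // key present, value absent
    assert_eq!(middle_names.get("grace"), None);              // key not present at all

    // Same payload type on both sides is fine: Ok and Err stay distinct,
    // just like `Either String String` in Haskell.
    let r: Result<String, String> = Err("boom".into());
    assert!(r.is_err());
}
```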