
The problem is the overhead of adding typing. The time taken to learn the type system is a one-off cost, but static typing has ongoing costs: it slows down development, discourages exploratory programming, adds cognitive load in reasoning about the type system, and produces "false positives" by disallowing otherwise correct programs. It does nearly eliminate a class of bugs - this is wonderful, but the bugs it eliminates tend to be ones that are fairly obvious to find anyway.



What about the cognitive load of chasing down things like null reference errors, which in some strongly typed languages don't even exist? And no, they're not always fairly obvious to find, especially when you're fixing somebody else's code.


The key word being "some". I've rarely had any problems with null/nil reference errors in Clojure, because nil is treated both as false and as the empty list, so most of the time you do the right thing in the nil case. You also don't call methods on objects, but send values to functions, which eliminates a large source of NPEs right there.
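
For anyone unfamiliar with Clojure's nil-punning, here is a rough sketch of what that looks like (REPL results shown in comments, from memory, so treat them as illustrative):

    ;; nil is logically false and flows through most sequence functions,
    ;; so the "null case" is usually handled by default
    (if nil :truthy :falsey)  ;=> :falsey
    (first nil)               ;=> nil
    (map inc nil)             ;=> ()
    (conj nil 1)              ;=> (1)
    (count nil)               ;=> 0

    ;; values are passed to functions instead of calling methods on a
    ;; possibly-nil receiver; a keyword lookup on nil just returns nil
    (:name nil)               ;=> nil, where user.getName() would throw an NPE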

Clojure is also a compiled language, so things like calling a function with the wrong arity, or calling a variable/function that doesn't exist, are caught at compile time. These are, by far, the most common runtime bugs I've experienced in JavaScript and Python, usually due to spelling mistakes made during or outside of refactoring.
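
To illustrate the undefined-var half of that (a sketch; the exact compiler message wording varies by Clojure version):

    (defn greet [name]
      (str "Hello, " name))

    ;; a typo in the call fails when the namespace is compiled/loaded,
    ;; with something like "Unable to resolve symbol: greeet in this context"
    (greeet "world")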


> most of the time you do the right thing in case of nil

I can't remember where, but I was just reading something yesterday to the effect that a solution that works most of the time is worse than a 'solution' that never works: at least you'll notice the latter quickly, whereas you might not notice the former until it's buried so deep in your code that you've forgotten the hidden assumptions it involves.


Well, that can't be taken seriously. Real life works most of the time. Only theoretical thought experiments could work all the time. Can you show me a clock that has worked for more than 10K years? They are still trying to invent one (and we don't have it yet).


I agree, and Skinney made a similar point yesterday (https://news.ycombinator.com/item?id=9759571). However, I think that an excessively literal reading misses the spirit of the quote (as I (probably mis)remember it). I elaborated in my reply https://news.ycombinator.com/item?id=9760336 .

Incidentally, your question about the 10K-year clock (to which I think it is impossible to give an answer today even if one exists today, since I think clockmaking has not yet been practiced for 10K years!) reminds me of anecdote #3 in http://www.netfunny.com/rhf/jokes/91q3/oldanecd.html .


By that logic, most object oriented languages like Java, C# and C++ are fundamentally flawed as calling a method only works most of the time.

Besides, code in general only works most of the time. Static typing does not protect you from bugs.


> Static typing does not protect you from bugs.

But it does. It doesn't protect you from all bugs, but nobody claimed it did.


True, I should have specified what I meant. The thing is, static typing doesn't necessarily protect you from more bugs than a dynamic language. For instance, Clojure is less prone to NPEs because you don't call methods on objects, and because a lot of your program manipulates data structures, and nil is treated as an empty data structure. Of course, a language like Rust, which doesn't have null, does protect against NPEs. It's all relative.
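
A small sketch of the data-structure side of that point (hypothetical keys, results from memory):

    (def user nil)  ; e.g. a lookup that found nothing

    ;; nested access on nil just yields nil instead of throwing,
    ;; because keywords act as functions applied to (possibly nil) maps
    (:city (:address user))         ;=> nil
    (get-in user [:address :city])  ;=> nil

    ;; and nil behaves like an empty map when you build on it
    (assoc user :name "Ada")        ;=> {:name "Ada"}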


Reading this again makes me realize that I'm wrong. Static typing will safeguard you against certain types of bugs that a dynamic language won't. The point I was trying to make was that a statically typed language doesn't necessarily protect you against certain types of bugs that you may rarely experience in a dynamic language anyway.


I didn't call anything flawed, fundamentally or otherwise; but I take your point.

I think that there is an important distinction between different kinds of 'almost-there' solutions.

Suppose that you have a piece of code that is specified to work in a certain way. Because code, and the hardware on which it runs, is made (at least indirectly) by humans, it will fail under some conditions; it only satisfies its specification. I think that we are all comfortable with this kind of "works most of the time".

By contrast, consider 'smart' products—for example, the auto-complete on your phone. This also, after a bit of training, works most of the time, and can make some brilliant inferences. However, I think that most people can agree that the "most of the time" for auto-completion is qualitatively different from the "most of the time" for specified software: it is reasonable to rely on the latter, but not, or at least not nearly as much, on the former (http://www.damnyouautocorrect.com).


Oops, I left an important word out:

> Because code, and the hardware on which it runs, is made (at least indirectly) by humans, it will fail under some conditions; it only satisfies its specification.

This sentence was supposed to end "it only satisfies its specification some of the time." I was not going for Knuth-ian irony about proving vs testing (https://en.wikiquote.org/wiki/Donald_Knuth#Sourced).


I know you didn't, but if you have a language whose core semantics only work "most of the time," and "most of the time" is worse than "never," then you have a language that is worse than not having a language at all, which means that it is fundamentally flawed. Yes, I probably did enjoy my logic courses at the university too much.


But they are. As in, they fail to handle the null case properly.


Do you have any evidence to prove this? I've read a few papers that disagree with your opinions...


That's the thing though: there are lots of papers out there that support both sides of the argument. For example, take this large-scale study of GitHub projects: http://macbeth.cs.ucdavis.edu/lang_study.pdf . In the study, Clojure is right up there for correctness with hardcore statically typed languages like Haskell.

There's plenty of software written in both typing disciplines in the wild. Yet nobody has managed to conclusively demonstrate that software written in statically typed languages is produced faster, has fewer overall defects, or has lower maintenance costs. The very fact that we're still having these debates speaks volumes, in my opinion.

The majority of arguments regarding the benefits of static typing appear to be rooted squarely in anecdotal evidence.


I agree with this comment of yours, but the first one has a few overstatements, and that's what prompted my reply. Also, if you feel that strong types slow you down, I suggest you try a gradually typed language where you can start out without types, do the exploring, and add the types once you know what you want. I've tried this in TypeScript and it was a fairly good experience.


It wasn't my intention to overstate the benefits of dynamic typing, just to point out that static typing isn't a free lunch. I think yogthos sums up the situation pretty well.


Clojure takes this approach as well, with core.typed: https://frenchy64.github.io/2015/06/19/gradual-typing.html
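
For reference, a minimal sketch of what adding core.typed annotations after the fact can look like (from memory of the 2015-era API, so treat the details as approximate):

    (ns example.core
      (:require [clojure.core.typed :as t]))

    ;; the function can be written untyped first; the annotation is added later
    (t/ann total-price [t/Num t/Num -> t/Num])
    (defn total-price [price qty]
      (* price qty))

    ;; (t/check-ns 'example.core) then type-checks the namespace on demand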

I personally haven't found a place where this is necessary yet, but who knows what might happen in the future. :)


Probably based on personal experience. For me, whenever I start a project from scratch I usually begin with the most dynamic environment I can find, then hack away until I have something that resembles the functionality I need, before locking it down. In the beginning of a project I don't care if something is 100% correct, I just want to see what my idea looks like, and what I actually need. I don't want to deal with synchronizing my types with a database schema, or make sure the JSON I send to the server satisfies the type system. I really don't care about interfaces or abstract classes or whatnot, I just want to retrieve the value I know to be in the object I got. Then, when I have a working'ish prototype, I can start nailing it down with types, assertions, tests and contracts.

Of course, people are different. Some people do all of this on paper before writing a single line of code. Others plan by making types and interfaces, then proceed to implement those. I just like making prototypes.


"Some people do all of this on paper before writing a single line of code. Others plan by making types and interfaces, then proceed to implement those. I just like making prototypes."

Of these, I sound the most like you - think just a little, then sit down and start coding to get a feel for things.

Yet I find a good type system essential (or at least conspicuously missing when I write Python). There is a notion that, if one has static type checking, one has to get the types right before writing code. There's no reason that has to be the case. Get a sketch, start writing code, refine your sketch. With type inference, the sketch can even be somewhat incomplete and still help me find where my assumptions clash.


Don't know why people are downvoting you, your approach is perfectly valid.


That's a really important point. Lots of time gets sunk into conforming to the compiler that could have been spent on exploratory programming. Humans are fundamentally explorers, not conformists.

Nice duality you exposed right there!



