
What about the cognitive load of chasing down things like null reference errors, which in some strongly typed languages don't even exist? And no, they're not always fairly obvious to find, especially when you're fixing somebody else's code.



The keyword being "some". I've rarely had any problems with null/nil reference errors in Clojure, because nil is treated both as false and as the empty list, so most of the time you do the right thing in the case of nil. You also don't call methods on objects, but pass values to functions, which eliminates a large source of NPEs right there.
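A minimal sketch of that nil-punning (results shown in comments):

    ;; nil is falsey, so conditionals do the right thing without a null check
    (if nil "truthy" "falsey")   ;=> "falsey"

    ;; the core sequence functions treat nil as an empty sequence
    (first nil)    ;=> nil
    (rest nil)     ;=> ()
    (map inc nil)  ;=> ()
    (count nil)    ;=> 0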

Clojure is also a compiled language, so things like calling a function with the wrong arity, or calling a variable/function that doesn't exist, are caught at compile time. These are, by far, the most common runtime bugs I've experienced in Javascript and Python, due to spelling mistakes made during (and outside of) refactoring.
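A quick sketch of what gets caught where (error text approximate; in JVM Clojure the unresolved symbol is a compile-time error, while the arity mismatch surfaces as an ArityException the moment the call is evaluated):

    (defn add [a b] (+ a b))

    (defn use-add []
      (addd 1 2))  ; CompilerException: Unable to resolve symbol: addd

    (add 1)        ; clojure.lang.ArityException:
                   ; Wrong number of args (1) passed to: user/add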


> most of the time you do the right thing in case of nil

I can't remember where, but I was just reading something yesterday to the effect that a solution that works most of the time is worse than a 'solution' that never works: at least you'll notice the latter quickly, whereas you might not notice the former until it's buried so deep in your code that you've forgotten the hidden assumptions it involves.


Well, that can't be taken seriously. Real life works most of the time; only theoretical thought experiments work all the time. Can you show me a clock that has worked for more than 10K years? They are still trying to invent one (and we don't have it yet).


I agree, and Skinney made a similar point yesterday (https://news.ycombinator.com/item?id=9759571). However, I think that an excessively literal reading misses the spirit of the quote (as I (probably mis)remember it). I elaborated in my reply https://news.ycombinator.com/item?id=9760336 .

Incidentally, your question about the 10K-year clock (to which I think it is impossible to give an answer today even if one exists today, since I think clockmaking has not yet been practiced for 10K years!) reminds me of anecdote #3 in http://www.netfunny.com/rhf/jokes/91q3/oldanecd.html .


By that logic, most object-oriented languages, like Java, C# and C++, are fundamentally flawed, since calling a method only works most of the time.

Besides, code in general only works most of the time. Static typing does not protect you from bugs.


> Static typing does not protect you from bugs.

But it does. It doesn't protect you from all bugs, but nobody claimed it did.


True, I should have specified what I meant. The thing is, static typing doesn't necessarily protect you from more bugs than a dynamic language does. For instance, Clojure is less prone to NPEs because you don't call methods on objects, and because a lot of your program manipulates data structures, and nil is treated as an empty data structure. Of course, a language like Rust, which doesn't have null, does protect against NPEs. It's all relative.
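To make that contrast concrete (a sketch: Clojure's core data functions tolerate nil, while Java interop on a nil target still blows up at runtime):

    ;; data-structure functions treat nil as an empty collection/map
    (get nil :key)     ;=> nil
    (:name nil)        ;=> nil
    (assoc nil :a 1)   ;=> {:a 1}

    ;; but invoking a Java method on nil throws at runtime
    (.toUpperCase nil) ; NullPointerException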


Reading this again makes me realize that I'm wrong. Static typing will safeguard you against certain types of bugs that a dynamic language won't. The point I was trying to make was that a static language doesn't necessarily protect you against a certain type of bug that you may rarely experience in a dynamic language anyway.


I didn't call anything flawed, fundamentally or otherwise; but I take your point.

I think that there is an important distinction between two kinds of 'almost-there' solutions.

Suppose that you have a piece of code that is specified to work in a certain way. Because code, and the hardware on which it runs, is made (at least indirectly) by humans, it will fail under some conditions; it only satisfies its specification. I think that we are all comfortable with this kind of "works most of the time".

By contrast, consider 'smart' products—for example, the auto-complete on your phone. This also, after a bit of training, works most of the time, and can make some brilliant inferences. However, I think that most people can agree that the "most of the time" for auto-completion is qualitatively different from the "most of the time" for specified software: it is reasonable to rely on the latter, but not, or at least not nearly as much, on the former (http://www.damnyouautocorrect.com).


Oops, I left an important word out:

> Because code, and the hardware on which it runs, is made (at least indirectly) by humans, it will fail under some conditions; it only satisfies its specification.

This sentence was supposed to end "it only satisfies its specification some of the time." I was not going for Knuth-ian irony about proving vs testing (https://en.wikiquote.org/wiki/Donald_Knuth#Sourced).


I know you didn't, but if you have a language with a core semantic that only works "most of the time," and "most of the time" is worse than "never," then you have a language that is worse than no language at all, which means that it is fundamentally flawed. Yes, I probably did enjoy my logic courses at university too much.


But they are. As in, they fail to handle the null case properly.



