Death of a Language Dilettante (dadgum.com)
122 points by doty on May 24, 2016 | 65 comments



I find there are roughly two categories of languages that are worthwhile (based on my opinion of course).

1. Easy to write/produce: Languages/frameworks that are easy to write because they are highly expressive (Haskell/Scala) or very opinionated (Rails).

2. Easy to read/maintain: Verbose languages with excellent tools... cough... Java/C#.

As for reading code, I don't know what it is about crappy verbose languages, but I have yet to see Java/C# code where I couldn't figure out what was going on. Sure, I have more experience with these languages, and the tools (particularly code browsing with an IDE) make it so much easier... but so do most people.

The reality is that language dilettantes think writing code is painful (as mentioned in the first paragraph by the author), but the real bitch is maintaining it.

I feel like there must be some diminishing returns on making a language too expressive, implicit, and/or convenient but I don't have any real evidence to prove such.


I would add a third category, which is not exclusive with the other two: languages that provide good tools for abstraction.

The pitfall with #1 is leaky and obscure abstractions. It's easy to write code that has performance problems or requires a lot of understanding of moving parts not actually related to the problem at hand. Where's the code responsible for putting the current state on a web page? All I see is a bunch of monad transformations and I don't know what they're for! Sure, I can figure out what's going on eventually, but I'll have to read a lot of CS papers first.

The pitfall with #2 is lack of ability to write a suitable abstraction for the problem. Instead, the problem has to be fit to the language. You end up with either something relatively simple, but inflexible or a large amount of incidental complexity. Why do I need to implement AbstractThingPutterOnPageGenerator and generate a ThingPutterOnPage before I can put a thing on the page? Couldn't this just be called putThingOnPage() and use some optional args when the default behavior doesn't cut it? Sure, I can figure out what's going on eventually, but I'll have to read a lot of code first.
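
To make that contrast concrete, here is a minimal Java sketch. Every name in it (Page, Style, the ThingPutterOnPage hierarchy) is invented to echo the joke above, not taken from any real framework, and since Java has no optional arguments, overloads stand in for them:

    // All names below are invented for illustration; none come from a real library.
    enum Style { DEFAULT, BOLD }

    class Page {
        void render(String thing, Style style) { System.out.println(style + ": " + thing); }
    }

    // The "enterprise" shape of the API: a factory hierarchy callers must wade through.
    interface ThingPutterOnPage {
        void put(String thing, Page page);
    }

    abstract class AbstractThingPutterOnPageGenerator {
        abstract ThingPutterOnPage generate();
    }

    // The shape the commenter asks for: one method, with overloads standing in
    // for optional arguments.
    final class Pages {
        static void putThingOnPage(String thing, Page page) {
            putThingOnPage(thing, page, Style.DEFAULT);
        }

        static void putThingOnPage(String thing, Page page, Style style) {
            page.render(thing, style);
        }
    }

A caller then just writes Pages.putThingOnPage("hello", page) and only reaches for the factory machinery if it ever actually earns its keep.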

I think Lisp has always been strong in the third category, and that Clojure is a Lisp especially suited to real-world use right now. The heavy emphasis on defining code in terms of generic operations on generic data structures is a particular strength. For something more mainstream, Python does pretty well here. That's largely cultural though; Python has a very comparable feature set to Ruby, but Ruby's community doesn't have "explicit is better than implicit", the lack of which can lead to code which is impenetrable rather than merely dense.


The problem with Lisps is that generally speaking, they're all interpreted, which means type errors are discovered at runtime. Which sucks for maintenance.


Most implementations of Common Lisp have ahead of time compilation, at least as an option, but also have the compiler, or sometimes a different compiler or an interpreter available at runtime. Clojure is also typically AOT-compiled to JVM bytecode.

Did you mean that Lisps are dynamically-typed? That's true, and whether it's mostly good or mostly bad is a religious topic that almost certainly lacks one true answer. My own take on it is that I program very interactively and static typing feels like an impediment to that most of the time. Furthermore, type errors are usually a small subset of the possible errors and many static type systems allow any type to be null anyway, drastically reducing the benefit.


Common Lisp is compiled and has a type system.

Several type systems.


> I feel like there must be some diminishing returns on making a language too expressive, implicit, and/or convenient but I don't have any real evidence to prove such.

I think it is understood that the more expressive your language is, the more difficult it is to make tools for the language. For example, Common Lisp style (non-hygienic) macros are hard to support in a debugger (by which I mean, hard to allow the developer to step through their code as they wrote it, rather than stepping through the final expanded form). Dynamic dispatch makes it difficult for tools to answer "who calls this function?" and "which function does this call invoke?" (not impossible with some forms of static typing, but more difficult in general).
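
To make the dispatch half of that concrete, here's a tiny Java sketch (all names invented):

    import java.util.List;

    interface Handler {
        void handle(String msg);
    }

    class LogHandler implements Handler {
        public void handle(String msg) { System.out.println("log: " + msg); }
    }

    class MailHandler implements Handler {
        public void handle(String msg) { System.out.println("mail: " + msg); }
    }

    class Dispatch {
        static void process(List<Handler> handlers, String msg) {
            for (Handler h : handlers) {
                // A static "which function does this call invoke?" query can only answer
                // "some implementation of Handler.handle" here; the concrete method depends
                // on what was added to the list at runtime. Likewise, "who calls
                // LogHandler.handle?" has no complete static answer.
                h.handle(msg);
            }
        }

        public static void main(String[] args) {
            process(List.of(new LogHandler(), new MailHandler()), "hello");
        }
    }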


I think I agree with that idea. However I have also wondered if it is because more explicit/verbose languages take longer to physically write and thus the verbose pattern is sort of repeated throughout. For example, in Java it is typical to have ridiculously long spelled-out variable/function names. While this is annoying to write, it often makes maintenance slightly easier for a variety of hopefully obvious reasons.

It seems with really expressive languages you get programmers who will use extremely short variable/function names (Haskell being the extreme). Of course this could be just cultural (e.g. Haskell academia). That is, it seems that when the language gets easy, people get lazy :) (this is probably a false assumption).

I'm not sure if it's analogous, but an extreme opposite of an expressive language would be punch cards. My grandmother used to work on ancient computers and you would have to really think ahead about what you wanted to do. Consequently lots and lots of documentation would be done.


I've programmed in a couple of languages that one might consider verbose: Java and Common Lisp. The normal tools for both of those provide some form of name-completion. In fact, in Emacs+SLIME, I can do something like this: "(ge-in-ru", hit tab, and have it completed to "(get-internal-run-time". In any case, I consider the typing-out of stuff like that to be a minor part of programming anyway.

Edit: fixed stupid grammatical mistake


I don't think symbol name length is usually what people are complaining about when they call a language verbose. It's usually closer to the number of symbols required to write a program, and CL usually does pretty well on that metric.


I think the real issue is total cognitive load.

There's the cognitive load of the language itself. There's the cognitive load of the libraries. There's the cognitive load of the algorithm. And there's the cognitive load of the actual code (number of lines times how hard each line is to read - smart coding conventions help quite a bit here).

But it's not that simple, because people are different. Different people have different cognitive load when presented with the same language. I think this is one of the reasons Haskell is so polarizing - it either has a low cognitive load for you, or a very high one. And if it has a very high one, you're not likely to spend the time and effort to get to the point where it has a low cognitive load for you.

> I feel like there must be some diminishing returns on making a language too expressive, implicit, and/or convenient but I don't have any real evidence to prove such.

I think that going too far in any of those directions probably increases total cognitive load, by making some other component worse.


I think it's incorrect to point at just the languages though, I feel many methodologies or standards help/hinder this too. And tools. And even, to some extent, communication and/or culture (culture of communication).


I admittedly don't have to read Java or C# code often; the few times I had to, though, it was a fair bit of pain -- so, I would much rather have to figure out someone's Perl code than deal with either one of these.

The problem was not with the languages themselves, they are just fine, and I actually quite like C# -- but it seems that a lot of third-party library authors for these languages really go all out on various design patterns, abstracting everything, etc., in the process making the simplest things quite impenetrable.

Could have been just my luck though.


It's two extremes of the same problem. Perl code generally doesn't abstract enough, while Java code often abstracts too much. I recently was tasked with porting a legacy Perl system to Java. The main .pl file of the legacy code was only about ~6k lines, but it also only contained 6 subroutines in the entire thing. The final Java product probably had more actual code gluing everything together and abstracting it, but the meaty parts were much cleaner, simpler, and easier to understand. I was able to show newcomers the new system and within a few minutes they could figure out at least what the major moving parts do, while they would immediately run away from the Perl code due to the sheer scariness of it.


There are a couple of implicit assumptions in the final paragraphs that I think should be made explicit. One was that we can evaluate these languages based on this experiment with a single programmer. Another was the not-clearly-defined term strong work ethic, by which I think he means someone who will strive to make the program work properly, not have horrible kludges, will avoid known problematic aspects of the language, etc.

The problem with these assumptions is that you don't run into situations like that often. You're far more likely to run into a team of people of mixed abilities, and with some languages, one or two of them will be able to inflict horrors on the whole codebase.


> Another was the not-clearly-defined term strong work ethic, by which I think he means someone who will strive to make the program work properly, not have horrible kludges, will avoid known problematic aspects of the language, etc.

Indeed. Using a different definition of "strong work ethic", I've met programmers who had too strong a "work ethic" - using it as an excuse or crutch to scoff at improvements to code readability or maintainability. After all, if you just power through it with enough overtime, you can wade through even the worst codebases, so why bother cleaning it up? To make things easier? And you want to take a break to step back, think on the problems, and discuss options instead of just sitting down and coding more? Sounds like you're just looking for excuses to be lazy - put down the coffee and get back to work!

Needless to say, this can lead to a lot of firefighting and damaged morale.

Ivory tower academics can get too caught up on theory to practice effectively. On the other hand, that's probably still preferable to the COBOL-only programmer that doesn't understand why things have changed since the 1970s - after all, COBOL can do anything your newfangled languages can! Better than either: Give me a practical polyglot. Preferably one who hates whatever terrible language we're going to be using, with a laundry list of issues that language has to back up that hate. Why such a hater? Because that hate sounds like the impassioned voice of experience with these problems (and how to mitigate or avoid them, even if one of those options - switching languages - isn't on the table.)


I just tried Elm and I can assure you that bad programmers can write bad code in any language, good or bad.


The problem is that in some languages, good programmers cannot help but write bad code, and will review bad code without catching all the bad.


I'm not disagreeing with that, what I am saying is that some languages offer up a power to perpetrate real horrors that some other languages don't. For instance, much as I love Common Lisp, I'd hate to work in a group with people who don't know how to write macros but do so anyway. There are all sorts of things one can get up to in C or assembly that you can't get up to in Java (deliberately treating a chunk of memory as if it were a type other than it really is, for example). Some languages like Smalltalk don't have a way of enforcing privacy on APIs, unlike say Java, so on a large project you can find out that someone you don't even know has just started using your private APIs and you are now obligated to support them as if they were public APIs, restricting your own freedom to change internals. These are all problems I have run into.


To me "bad programmers can write bad code in any language" is a snarky critique on favoring laws over conventions. There is this hope that given strong laws (safeguards), collaborating with unreasonable people is easier. Then the discussion devolves into the use of dangerous features versus the need to work with unreasonable people.

In my view, the discussion should be about comfort. At what abstraction levels will we work? If you put in a lot of safeguards, you can work comfortably at a certain level. But if you want to move out of this band, be it to write some low-level glue-code, or some higher abstractions, you find the safeguards to be a barrier. If those safeguards are conventions, you can agree to break them, if they are laws, you must subvert them or not work at those levels.

I prefer conventions as much as I prefer working with reasonable people. And sometimes turning conventions into law has few downsides, like with the private keyword.
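
For what it's worth, a minimal Java sketch (invented names) of that last point, a convention turned into law:

    // Invented example: the compiler, not a code review, keeps outsiders off the internals.
    class Cache {
        public String get(String key) {            // supported, public API
            return loadAndDecode(key);
        }

        private String loadAndDecode(String key) { // internal detail, free to change
            return "value-for-" + key;
        }
    }

    class SomeoneElsesCode {
        void use(Cache c) {
            c.get("k");              // fine
            // c.loadAndDecode("k"); // won't compile: the "law" version of privacy,
            //                       // unlike a Smalltalk-style naming convention
        }
    }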


The 'single programmer' bit is the most damning IMO. It assumes that a programming language is objectively good or bad, when languages are instead a medium for creation where the personal affinities of the creator are the most important factor.


Precisely: language/library/framework decisions affect the whole team. If you choose something weird, you'll quickly be cursed under your teammates' breath--if not screamed at to your face--because eventually they're going to need to work on your code, too.

That's not to say you can't bring new technologies into a company. I've done it several times. You just need to understand that it's a big undertaking to get an entire team to buy in and learn that new tech.

Of course for hobby projects, go hog wild. That's how I pick up new languages.


Temper the soldier rather than steel, and a club becomes a sword. A fairer example might be to compare the Nix and Guix package manager codebases, which aim to implement the same model of declarative system wide dependency management. The former is written by a university team in C++ and Perl, the latter by GNUcolytes in Guile Scheme and a touch of C.


> Temper the soldier rather than steel, and a club becomes a sword.

WTF, I haven't heard that one before. Did you make it up?

When I google it, your above comment is the third hit, and the previous two aren't relevant.


The exact phrase, yes, but I read "Temper the fighter, not the sword" here: [0].

The general observation that either the tool or the operator can be improved in an operator-tool system is semi-regularly invoked in discussions of new programming languages/EDC multi-tools/governance models within my conversational circles.

[0] https://www.reddit.com/r/systema/comments/3lcn5q/systema_lin...


Well, which do you say is better?


Neither, sadly: one will be relegated to the "just an academic curiosity" phase, the other to obscurity. [1]

1: https://news.ycombinator.com/item?id=10005646


I'm wondering too ^-^


That's a very nice proverb, love it.


My takeaway here is that it only pays to be a programming language dilettante if you are actually building a programming language, especially if it's a dedicated language for a new platform. Otherwise, you're mostly going to be subject to whatever's already mostly in use on your platform of choice, up to minor tweaks in that language over time.


Unfortunately, the real value of anything is almost entirely due to extrinsic factors. Air? Very valuable if you're underwater, on the moon etc.

Which human language? The one spoken by the people you need to communicate with is most valuable.

The first iPhone? Very valuable then; not today.

But some people love intrinsic value. And it's what they create that ends up having real value. They would say that intrinsic value is the only "real" value. They aren't very practical.


> They would say that intrinsic value is the only "real" value.

Thing is, this is a testable hypothesis (at least in theory): measure whether those "intrinsically valued" languages make a true impact on software cost. Often, it is the very same people who tout this intrinsic value who deliberately shy away from testing this hypothesis empirically.

It's interesting that when Java's original designers analyzed customer needs vs. features offered by academic languages, they discovered that most value in those languages wasn't in the linguistic features but in the extra-linguistic features, so they deliberately put all the good stuff in the VM, and packaged it in a language designed to be as unthreatening and as familiar as possible. It was designed to be a wolf in sheep's clothing:

It was clear from talking to customers that they all needed GC, JIT, dynamic linkage, threading, etc, but these things always came wrapped in languages that scared them. -- James Gosling[1]

[1]: https://www.youtube.com/watch?v=Dq2WQuWVrgQ


Finally watched the whole thing, thanks again! Interesting and reassuring. Esp. the end: for Oracle, altruism is collateral damage.

Value types would be helpful to a serialization algebra I've worked on.

I can google this, but since this 2014 talk, do you know the real-world adoption of Java lambdas, and the actual performance benefits of their parallelism? I really liked Steele's re-tree idea, but it doesn't seem to have that much benefit. (Of course, if there are no dependencies between the data, the speedup should be great.) The real issue is that performance is mostly not an issue - hence Python/Ruby's success.


Thanks, I've just watched it (only up to your quote).

I agree, but how did customers need JIT? I thought it was just to compensate for the inherent performance penalty of an extra layer, of a VM.

BTW: Android java now uses ahead-of-time compilation (5.0 switched from JIT dalvik to AOT art).

Just on the "wolf" part: not only was the language familiar (sheep's clothing), but features were removed. Not just for familiarity, they actually caused problems and so removing them was an improvement.

e.g. removing operator overloading (which C++ had): apart from very fundamental maths (such as complex numbers and matrices), overloaded operators caused a lot of problems, probably because they were algebras designed by non-mathematicians. Removing them hurt matrix algebra etc., but was by far a net benefit. (BTW a nice thing about shader languages is matrices as first-class values).

Finally, taking this back to my GP comment on intrinsic vs extrinsic: I was thinking of the language as a "product" which would include the VM... but I think your approach is better. Along those lines, performance, bugginess, portability of the runtime etc are also non-linguistic features, strictly speaking.

Actually testing that hypothesis is really difficult. I've only heard of a few attempts to measure productivity in different languages, and their experimental design is not very compelling. Of course, in practice, all those non-linguistic factors dominate.

Yet, some of James' non-linguistic "wolf" features have linguistic counterparts: e.g. no memory management; threading. Therefore, they are examples of linguistic features with "intrinsic value". I'd also include references (instead of pointers), and array-bounds checking (um... is that last one a "linguistic" feature?)

I think some language features have real value - though, as with java's inspirations, probably not the whole language, just particular features.

[ But I meant by my statement, that these folks think intrinsic value is the only "real" value, that they dispute extrinsic value entirely! i.e. that ideas are eternal, valuable absolutely and despite context; and contingent fluctuations in supply and demand, progress of technology and install base etc are 100% meaningless.

Like the truth of a theorem, regardless of its usefulness. ]


> I agree, but how did customers need JIT? I thought it was just to compensate for the inherent performance penalty of an extra layer, of a VM.

Good -- and simple[1] -- abstractions require lots of dynamic dispatch, which can be optimized away by a JIT. The problem is far simpler (and may be partially solved AOT if you don't have dynamic linking and especially dynamic code loading, as is the case on Android).

> and array-bounds checking (um... is that last one a "linguistic" feature?)

I would definitely classify that as extra-linguistic.

> But I meant by my statement, that these folks think intrinsic value is the only "real" value, that they dispute extrinsic value entirely! i.e. that ideas are eternal, valuable absolutely and despite context; and contingent fluctuations in supply and demand, progress of technology and install base etc are 100% meaningless.

I understand and completely agree.

[1]: You can do away with a lot of dynamic dispatch at the cost of a larger number of (and therefore less simple) abstractions, as in the case of Rust.


> Good -- and simple[1] -- abstractions require lots of dynamic dispatch

So they don't need the JIT per se; they need it for linguistic abstractions to be performant, which is non-linguistic. That's splitting hairs though. So, James' customers needed those abstractions, and _he_ said they needed a JIT.

Incidentally, I've been doing dynamic code loading on Android 5.0, with its AOT. So it works. For my use, it's hard to tell if its startup is slower, though I'd expect it to be.

I hadn't heard that about Rust.


> Incidentally, I've been doing dynamic code loading on Android 5.0, with its AOT.

How does that work? Or is their AOT really a JIT that works all at once, rather than collecting a profile first?

> I hadn't heard that about Rust.

Basically, in Rust (as in C++) you can pick either static dispatch abstractions, which are "zero cost", or dynamic dispatch abstractions, which are more costly. On the JVM you get dynamic dispatch as the abstraction, and the JIT figures out whether static dispatch can suffice per call site, and compiles the dynamic abstraction to static-dispatch machine code.
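
A rough Java sketch of the shape of that trade-off (illustrative only; whether the call actually gets devirtualized is up to the particular JIT):

    interface Shape {
        double area();
    }

    final class Circle implements Shape {
        private final double r;
        Circle(double r) { this.r = r; }
        public double area() { return Math.PI * r * r; }
    }

    class Total {
        static double total(Shape[] shapes) {
            double sum = 0;
            for (Shape s : shapes) {
                // The source only ever expresses a dynamic (interface) call. If the JIT
                // observes only Circle at this call site (or can prove it via class
                // hierarchy analysis), it can typically devirtualize and inline area(),
                // so the simple abstraction costs little at runtime.
                sum += s.area();
            }
            return sum;
        }

        public static void main(String[] args) {
            Shape[] shapes = new Shape[1000];
            for (int i = 0; i < shapes.length; i++) shapes[i] = new Circle(i);
            System.out.println(total(shapes));
        }
    }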


Sorry for the delay. It works just the same as dalvikvm, using DexClassLoader. I guess it must just compile then run - like a JIT without profiling, as you say. But I don't know the innards.

Thanks for the info on rust.


Give me Scala and a real-world problem vs someone using PHP or Javascript and I will beat them on the initial write, and destroy them on the maintenance. I wouldn't use Scala professionally if I didn't believe this.

In the short term practical concerns can be more important than PLT ones - in five years' time I'm sure Idris will be a better language than Scala, but for some tasks it isn't yet - apart from anything else, you need a strong library/tool ecosystem before a language is truly useful. But that's a temporary state of affairs. If you were making this kind of judgement 20 years ago, and chose a popular language like Perl or TCL or C++ over a theoretically-nice language like OCaml or Haskell, how would you be feeling about that decision today?


Give me Java and a real-world problem vs someone using Scala and I will beat them on the initial write, and destroy them on the maintenance. :)

> in five years' time I'm sure Idris will be a better language than Scala

Idris? In the entire history of computing there has been a single[1] complete non-trivial (though still rather small) real-world program (CompCert) written in a dependently typed language. Even though the program is small and the programmer (Xavier Leroy) is one of the leading dependent-type-based-verification experts, the effort was big (and that's an understatement) and the termination proofs proved too hard/tedious for him, so he just used a simple counter for termination and had a runtime exception if it ran out. Idris is a very interesting experiment, I'll give you that. But I don't see how anyone can be sure that it would work (although you didn't say it can work, only that it would "be a better language than Scala", so I'm not sure what your success metrics are).

[1]: Approximately, though I don't know of any other.


> Give me Java and a real-world problem vs someone using Scala and I will beat them on the initial write, and destroy them on the maintenance. :)

Seems we have a true disagreement.

> Idris? In the entire history of computing there has been a single[1] complete non-trivial (though still rather small) real-world program (CompCert) written in a dependently typed language.

Five or ten years ago how many such programs were there in a language with higher-kinded types? Thirty years ago how many with type inference? Innovation is slow - perhaps five years was too optimistic, looking at the history - but it does happen; PLT ideas do eventually make their way into mainstream languages.

> although you didn't say it can work, only that it would "be a better language than Scala", so I'm not sure what your success metrics are

I think it will be the most effective (for real-world problems) general-purpose programming language - a spot I think Scala currently holds. Hard to define objectively of course.


First, higher-kinded types is a feature; dependent types is an entire philosophy. Second, I think you're misjudging the adoption of those languages/ideas: the percentage of real-world programs using HM type-systems (and similar) has hardly changed in the past two or even three decades. Scala is special because it's a single language with lots of paradigms. If you count only those who make good use of sophisticated typed abstractions in Scala and add those to the HM languages, there would still be a very small uptick. The major change in the past few years, I think, has to do with mainstream adoption of higher-order functions. That idea took fifty years to break into the mainstream.

As to language effectiveness, I can't argue with you because neither of us has any real data, but I can say that there's a lot of religion surrounding the question of how much linguistic features (as opposed to extra-linguistic ones, like GC) actually increase productivity. What is certain is that we still haven't broken the 10x productivity boost Brooks said wouldn't happen between 1986 and 1996, and it's been thirty years -- not ten -- and it seems like we won't do it in another decade.


> higher-kinded types is a feature; dependent types is an entire philosophy

Totality is a philosophy; you can have dependent types as a feature without it. Maybe immutability or purity are better comparisons for what Idris brings to the table, but if you're just talking about dependent types then I'm using them already.

> the percentage of real-world programs using HM type-systems (and similar) has hardly changed in the past two or even three decades.

Is that really true? I can't imagine a recruiter asking about Haskell, or a Facebook-sized company talking about their OCaml strategy, ten years ago.

> As to language effectiveness, I can't argue with you because neither of us has any real data, but I can say that there's a lot of religion surrounding the question of how much linguistic features (as opposed to extra-linguistic ones, like GC) actually increase productivity.

That feels like gerrymandering your definitions to me. GC is usually a language-level feature.

> What is certain is that we still haven't broken the 10x productivity boost Brooks said wouldn't happen between 1986 and 1996, and it's been thirty years -- not ten -- and it seems like we won't do it in another decade.

How would we tell? The productivity of the technology industry as a whole has certainly risen enormously. My general impression is that coding is the bottleneck a lot less often - even for a technology company - than it was five or ten years ago.


> Is that really true? I can't imagine a recruiter asking about Haskell, or a Facebook-sized company talking about their OCaml strategy, ten years ago.

I think it may have risen a tiny bit, but here's why I think it appears larger than it is: I don't think a recruiter would ask about Haskell today outside a very small section of the ecosystem, but that section seems larger than it is. What has changed in the past 20 years is the cultural prominence of startup culture (SV startups make up a minuscule percentage of the software industry, yet they get a significant portion of the media coverage), and even then only if you're involved in communities like Reddit and HN.

I think it is more an artifact of those communities that using certain languages lends a certain prestige, which is then used as a marketing effort (the CTO of a well known SV startup once told me that they have a small team using Scala only so they could attract a certain crowd). Similarly, in Facebook, it is my understanding that Haskell isn't really spreading, but is more of a marketing gimmick to developers of a certain sort in an environment with very unique characteristics. The entire software development industry consists of, I'd make an educated guess, 20 million developers or more (extrapolating from the known number of ~10M Java developers). I would be immensely surprised if more than a million of them can tell you what Haskell is (and by that I mean "a pure functional language").

So there's definitely an uptick in numbers and a rather strong uptick in exposure -- assuming you're following the right online communities -- but I don't think there's a real uptick in portion of production systems. Some people used Lisp and ML in production 20 years ago, too. They just didn't have HN to tell everyone about it. I'm not even sure we're currently at '80s level. Haskell is certainly talked about and used less than Smalltalk in the '80s, and see where it is today.

What is definitely true is that some ideas that had previously been associated with functional programming, most notably higher-order functions, have now finally made it into nearly all mainstream languages.

> That feels like gerrymandering your definitions to me. GC is usually a language-level feature.

I think the distinction between language-level abstractions and something like a GC (or dynamic linking) is rather clear, but I won't insist.

> How would we tell? The productivity of the technology industry as a whole has certainly risen enormously. My general impression is that coding is the bottleneck a lot less often - even for a technology company - than it was five or ten years ago.

Is it? I've been a professional developer for 20 years and while I agree that productivity has gone up considerably (maybe 2-3x) it can be attributed nearly entirely to automated tests and GCs.


Ten years ago even startups were largely Java-only when I looked (at least here in London). Five years ago nowhere was advertising pure-Scala/Haskell/OCaml/F# jobs at all. I heard about OCaml at Facebook from friends working there before the big public announcements about it. It's hard to separate general trends from my own trajectory of course.

> Is it? I've been a professional developer for 20 years and while I agree that productivity has gone up considerably (maybe 2-3x) it can be attributed nearly entirely to automated tests and GCs.

I think productivity is noticeably higher than even five years ago, when we already had GC and widespread testing.


I see gloves thrown. When does the contest begin?


> like Perl or TCL or C++

how about PHP or Javascript?


Same thing I think, though I don't think anyone would have tried to write a serious program in JavaScript 20 years ago.


It seems to me that if everyone followed the kind of pragmatism that this post argues for, nothing new would ever be adopted.


That would be a feature not a bug.

Every developer should be forced, I believe, to read Arthur C. Clarke's story Superiority ( https://en.wikipedia.org/wiki/Superiority_%28short_story%29 ) and to reflect on its application to their profession.

EDIT: Story can be found here... http://www.mayofamily.com/RLM/txt_Clarke_Superiority.html


> Story can be found here

Well worth a few minutes. Not that teaching it would help much, wisdom rolls off people's minds like water off a duck's back.


Only if the new thing legitimately and very clearly solved a real problem that presented a credible barrier to work. That's a very high bar to clear when creating a new programming language.


This article isn't exactly wrong. Certainly, running on your target platform and having library support for the things you're trying to do are critical features for getting anything done, and a great language that lacks these things is the wrong tool for the job. That doesn't mean criticism of bad design choices in, say, Javascript is mistaken, or as the author describes it, "troubling". It just means you probably have to use Javascript anyway[0].

It also leaves out another reason for learning languages and using them for pet projects: it makes you a better programmer. The more good languages you know, and idioms from those languages, the more likely you are to recognize when an ad-hoc implementation of one of those idioms is the right solution to a problem in the language you're actually using.

[0] Though possibly only as a compilation target.


When doing serious development and hitting these issues, the workaround isn't to continue using a broken language. The workaround is to use a different language. The first question 'Does this language run on the target system that I need it to?' isn't a yes or no question.

Take a look at this example - http://blog.fogcreek.com/the-origin-of-wasabi/

A language that compiles to PHP and ASP, what a relief.

And for the contemporary result - http://blog.fogcreek.com/killing-off-wasabi-part-1/

When the platform catches up, then you can go back to mainstream development with a useful language.


Sometimes you need to write a compiler.


> Even something as blatantly broken as the pre-ES6 scoping rules in JavaScript isn't the fundamental problem it's made out to be. It hasn't been stopping people from making great things with the language.

No, it hasn't been stopping them, but I guarantee you it's been slowing them down, at least a little. If nothing else, it makes the language a little bit harder to learn than it needed to be. I'll wager it also causes actual bugs that people have to spend time tracking down. It's true that those bugs can be avoided by proper discipline, but the brain cells required for enforcing that discipline could have been used for something else.

ETA: I agree with the author that a certain pragmatism is useful in selecting a language for a particular project, but I still think it's important to raise people's consciousness about warts in language designs. Doing so improves the odds that the next language someone designs to scratch their personal itch, but that happens to catch on for some reason, will have fewer such warts.


Over 90% of the first 100k LOC I wrote was ES5 JavaScript or CoffeeScript. I don't actually recall even a single instance when I was bitten by a bug related to lexical scope on the job. Maybe the problem is people expecting JavaScript to work like Java or some other block scoped language.

Async bugs, on the other hand, were nightmarish at times.


Did you write the same 100 klocs of code in a language with lexical scoping before coming to js?


Another 100 klocs of code before my first 100 klocs of code? That wouldn't have been possible...


Has it been slowing them down as much as debugging new frameworks and build pipelines?

Is it harder to learn a smaller thing with some tricks than a larger thing with few tricks?

Those are two questions I often ask myself when thinking about the evolution of the JS ecosystem.


Matlab? Custom functions were (or still are?) one function per file. I had a bunch of function files in a project directory. My mind was literally scattered.

Then I moved on to languages that allow binding a function to a variable. I had far fewer files. Simpler.

FP with anonymous functions further frees my mind from naming things, so I have zero chance of mistyped function names. Easier maintenance? Sure.

Those didn't stop me from getting work done; however, I prefer not to waste time on weaker programming languages, although coding in those languages did broaden my mind (yeah, now I know they suck).
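
In Java terms (the comment doesn't say which languages were involved, so this is only an illustration), that progression looks roughly like this:

    import java.util.function.DoubleUnaryOperator;
    import java.util.stream.DoubleStream;

    class Anon {
        public static void main(String[] args) {
            // What used to be a whole function file becomes a value bound to a variable...
            DoubleUnaryOperator square = x -> x * x;

            // ...or an anonymous function passed inline, with no name to mistype.
            double sum = DoubleStream.of(1, 2, 3)
                                     .map(square)
                                     .map(x -> x + 1)
                                     .sum();
            System.out.println(sum); // 17.0
        }
    }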


Kinda mean to Ruby.

Also, it misses the point: it's not the job at hand that matters, it's the 10,000 jobs keeping the thing alive that matter.


Maybe. But actually what's important is investment cost versus the ratio of supply to demand.

There are lots of Java jobs, yes, but there are also lots of Java programmers too. I'm a python dev (now) and while I have to search a little longer for jobs, I get good pay still, since I'm also rarer.


I think that what is important is how easy code is to maintain and how easy it is to move from 60 modules to 6,000 modules (and beyond). Running a big code base in PHP is very, very difficult (I have done this; it was not a good experience); running a big code base in Java is better (I have done this too; it wasn't that great, but some of the information hiding and abstractions in Java seemed to help partition the issues from one another). I have never run a large code base over a number of years in Python; I expect it's rather like doing things in Java. I feel strongly that if I had to run a large Julia code base (in about 2 years, when it goes gold) then things might be much improved and much less make-work might arise.


What do you mean by "goes gold"? Like 1.0?


Yeah - when they (the devs) underwrite that they believe their original intent is fulfilled - 1.0



