Humans aren't deterministic. I've trusted junior engineers to ship code. I fail to see a significant difference here in the long term.
We have engineering practices that guard against humans making mistakes that break builds or production environments. It isn't like we are going to discard those practices. In fact, we'll double down on them. I would subject an LLM to the level of strict validation that any human engineer would find suffocating.
The reason we trust compilers as a black box is because we have created systems that allow us to do so. There is no reason I can see currently that we will be unable to do so for LLM output.
I might be wrong, time will tell. We're going to find out because some will try. And if it turns out to be as effective as C was compared to assembly then I want to be on that side of history as early as possible.
Exactly, which is why I would want humans and LLMs to write maintainable code, so that I can review and maintain it, which brings us back to the original question of which programming languages are the easiest to maintain...
Well, we're in a loop then because my response was "you don't care about maintainable assembly".
I want maintainable systems; you want maintainable code. We can just accept that difference. I believe maintainable systems can be achieved without focusing on code that humans find maintainable. In the future, I believe we will build systems on top of code primarily written by LLMs and the rubric of what constitutes good code will change accordingly.
edit: I would also add that your position is exactly the position of assembly programmers when C came around. They lamented the assembly the C compiler generated. "I want assembly I can read, understand and maintain" they demanded. They didn't get it.
We're stuck in a loop because you're flip flopping between two positions.
You started off by comparing LLM output to compiler output, which I pointed out is a false equivalence because LLMs aren't as deterministic as compilers.
Then you switched to comparing LLMs to humans, which I'm fine with, but then LLMs must be expected to produce maintainable code just like humans.
Now you're going back to the original premise that LLM output is comparable to compiler output, thus completing the loop.
There are more elements to a compiler than determinism. That is, determinism isn't their sole defining property. I can compare other properties of compilers to LLMs. No "flip flop" there IMO, but your judgment may vary.
Perhaps it is impossible for you to imagine that LLMs can share some properties with compilers and other properties with humans? And that this specific blend of properties makes them unique? And that uniqueness means we have to take a nuanced approach to understanding their impact on designing and building systems?
So let's lay it out. LLMs are like compilers in that they take high level instructions (in the form of English) and translate them into programming languages. Maybe "transpiler" would be a word you prefer? LLMs are like humans in that this translation of high level instructions to programming languages is non-deterministic and so it requires system level controls to handle this imprecision.
I do not detect any conflict in these two ideas but perhaps you see things differently.
> There are more elements to a compiler than determinism.
Yes, but determinism is the factor that allows me to treat compilers as a black box without verifying their output. LLMs do not share this specific property, which is why I have to verify their output, and easily verifiable software is what I call "maintainable".
An interesting question you might want to ask yourself, related to this idea: what would you do if your compiler wasn't deterministic?
Would you go back to writing assembly? Would you diligently work to make the compiler "more" deterministic? Would you engineer your systems around potential failures?
How do industries like medicine or aviation deal with imperfect humans? Are there lessons we can learn from those domains that may apply to writing code with non-deterministic LLMs?
I also just want to point out an irony here. I'm arguing in favor of languages like Go, Rust and Zig over the more traditional dynamic scripting languages like Python, PHP, Ruby and JavaScript. I almost can't believe I'm fighting the "unmaintainable" angle here. Do people really think a web server written in Go or Rust is unmaintainable? I'm defending my position as if they are, but come on. This is all a bit ridiculous.
> How do industries like medicine or aviation deal with imperfect humans?
We have a system in science for verifying shoddy human output: it's called peer review. And it's easier for your peers to review your code when it's maintainable. We're back in the loop.
> Do people really think a web server written in Go or Rust is unmaintainable?
Things are not black and white. It will be less maintainable relatively speaking, proper tool for the job and all that. That’s why you will be left in the dust.
Yes it really does. npm has namespaces (called scoped packages) and even explicitly encourages their use for private packages to avoid this sort of attack. From the npm docs: "A variant of this attack is when a public package is registered with the same name of a private package that an organization is using. We strongly encourage using scoped packages to ensure that a private package isn’t being substituted with one from the public registry." [1]
> This gives exciting opportunities for you to register cal-home to trap users who mistype, or caII-home to innocuously add to your own or open source projects or whatever. Fun isn't it?
npm actively blocks typo-squatting attacks during the publishing process: "Attackers may attempt to trick others into installing a malicious package by registering a package with a similar name to a popular package, in hopes that people will mistype or otherwise confuse the two. npm is able to detect typosquat attacks and block the publishing of these packages." [1]
This thread is full of people demonstrating the concept of confirmation bias.
Given the popularity of GitHub, and the fact that a readme file is the first thing you see when pulling up a project on GitHub, most projects these days do in fact have readme files.
Not sure why this is considered a "classic" piece. It reads as if the author has just discovered the difference between preemptive and cooperative scheduling, but hasn't yet found the words to describe his "discovery". Yes, you can write a `while(true){}` loop and block the event loop. That's not some damning indictment of Node. The point is that you don't have to block on IO, so your program doesn't have to halt the entire world, and sit around doing nothing while you're waiting for a hard drive to spin or a network request to complete.
The Prisma engine is written in Rust (and the original product was written in Scala), so your snide comment is actually a bit inaccurate. You've also ironically failed to spell JavaScript using the correct casing.
> I never let JS guys work on backend code, because they are lost if they cannot find a module online that does what they are asked to do, or is missing half of the features from the spec it promised to implement (always the hard ones).
Stop self-aggrandizing. A huge percentage of backend devs would be lost without their giant MVC frameworks holding their hands throughout the dev process. Many of them do nothing more than write little controller functions while they lean on their frameworks, authentication modules, routing libraries, ORMs, and templating engines to do the hard work. You're not sending rockets to Mars after learning how to spin up a Django CRUD app.
Were you using TypeScript with Node? I really can't imagine working on a project that's even moderately complex without static typing. Having a static type checker is like having an assistant that writes automated tests on its own, and having those tests constantly running in the background.
Refactoring code without static type checking is a nightmare. If you're using a strict statically typed ORM like Prisma [1], you can change your data model, then run `tsc --noEmit --watch` in your terminal, and the compiler notifies you of every error along with its associated file name and line number. In VSCode you can even click the line number and it'll open the specific file where the error occurred. Before adopting TypeScript, I basically never attempted large refactors, and when I did it consisted of a long process of grepping for specific symbols in the project, and manually writing lots of tedious and verbose tests to do a fraction of what the compiler does automatically.
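To make the workflow concrete, here's a minimal sketch (the User type and the renamed field are hypothetical, not from any real project): rename a field in the model and every stale usage surfaces as a compile error.

    // Hypothetical model: suppose `email` was just renamed to `primaryEmail`.
    interface User {
      id: number;
      primaryEmail: string; // was: email
    }

    declare function sendMail(address: string): void;

    function contact(user: User) {
      // `tsc --noEmit --watch` now reports:
      // error TS2339: Property 'email' does not exist on type 'User'.
      return sendMail(user.email);
    }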
I remember developers from the Elm community years ago constantly advocating for static typing, and thinking "this has to be hyperbolic". Richard Feldman gave a talk about how he wrote code on an airplane for hours without an internet connection, and without actually running the code, and how the compiler ensured that the code just worked [2]. TypeScript is not as sound as a Hindley-Milner type system, but the benefits are similar (especially if you're running TypeScript in strict mode).
Or in other words, the presence of a static typing ecosystem is one of the advantages Node has over Elixir. So while the original commenter might not care that Elixir lacks static typing, they should care because it increases productivity.
The only reason I would want types is for specific error types that I could ideally match against instead of them being raised (in Node.js or TypeScript).
In Elixir you have the option of calling functions that raise vs functions that return an error tuple, so there's no contention there: function() returns an error tuple, while function!() raises.
Elixir also has this idea of a typespec and I have never felt I needed more, personally.
Typespecs in Elixir (combined with Dialyzer) give you a very limited version of what TypeScript offers. From what I understand, Dialyzer is designed to only raise an error when it's absolutely certain that you're mistaken about the types in your program. While this does prevent Dialyzer from emitting false positives, it means it only catches the most trivial of bugs [1].
Also, even with Elixir's typespecs, static typing is only fully useful if the entire ecosystem around a language embraces it. If some of the libraries you use don't ship with type definitions, it's going to be difficult for any static analysis tool to do a decent job. Almost every major JS library either ships with its own types, or has quality third party types available via DefinitelyTyped. It doesn't seem like the Elixir community has really embraced static typing to the same extent that TypeScript developers have. The creator of Phoenix, for example, hates Dialyzer [2].
I was referring to Elixir's blog post on static types where the creator of Elixir José Valim wrote: "The Dialyzer project, implemented in Erlang and available for Elixir projects, chose to have no false positives. However, that implies certain bugs may not be caught. At this point in time, it seems the overall community would prefer a system that flags more potential bugs, even if it means more false positives." [1]
From my experience with TypeScript, you really want your type checker to be eager to complain about type issues, which is why so many people turn on strict mode. In fact, occasionally I'll have to suppress a false positive error with a @ts-expect-error comment.
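As a contrived sketch of that escape hatch (made-up values, nothing from a real codebase), here's the kind of case where the checker is technically right but you know more than it does at a single call site:

    const values = [1, 2, 3];
    const first = values.find((v) => v > 0);
    // The checker types `first` as number | undefined even though we can see the
    // array is non-empty, so suppress the complaint on the next line only:
    // @ts-expect-error -- `first` is known to be defined here (assumption made for this sketch)
    const doubled: number = first * 2;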
But even assuming Dialyzer is as good as TypeScript, have you found that the libraries you use actually have quality type definitions? The few times I've used a JS library without type definitions it ended up breaking type inference, and infecting the codebase with the any type, which renders the type checker basically useless. How has your experience been in regards to that with Elixir?
> have you found that the libraries you use actually have quality type definitions?
I have some experience with TypeScript and it was way better than vanilla JavaScript. TypeScript mappings were available for most libraries, though for a couple they were not up to date. Similarly for Elixir, the coverage for nearly all APIs is pretty good but also has some holes. In my experience it hasn’t been a big deal either. Any serious library developer puts types alongside their function signatures/implementation.
> The few times I've used a JS library without type definitions it ended up breaking type inference, and infecting the codebase with the any type, which renders the type checker basically useless. How has your experience been in regards to that with Elixir?
Because Elixir is a functional language, Dialyzer can easily make inferences on untyped code based on ‘success typing’ (what José talks about), though it can make for some very obtuse and verbose warning messages. This inference is in addition to the safety provided by typespecs, which every Elixir library developer typically intermingles with their API surface.
If I were to level criticisms at Dialyzer, it would be its sometimes difficult-to-read warnings, its speed (despite being multithreaded) and the race conditions in the VS Code plugin (which is looking for extra maintainers – if I had time I would help). For the weird warnings I find I just need to add extra typing to my code to narrow it down, which I should be doing anyway.
> If I were to level criticisms at Dialyzer, it would be its sometimes difficult-to-read warnings, its speed (despite being multithreaded) and the race conditions in the VS Code plugin (which is looking for extra maintainers – if I had time I would help).
One of the advantages of TypeScript is that VSCode is written in TypeScript, and both VSCode and TypeScript are developed by the same company, so there's a really nice synergy there. I imagine Kotlin users feel the same way using JetBrains products, and Swift users feel the same way about Xcode.
Dialyzer looks interesting, but I can't imagine giving up on the expressiveness of TypeScript. Some of the things you can do with generics, mapped types, intersection types, template literal types, conditional types, and utility types are almost mind boggling. It's difficult to reap all of the benefits of static analysis without some of these advanced type operators. The type manipulation section of the TS manual is really underrated.
Someone for example wrote an SQL parser in TypeScript that requires no runtime code [1]. It can infer the types of an SQL query's result based on an SQL string without any runtime code execution. There was a similar project where someone built a JSON parser entirely using the type system [2]. There's also an ongoing discussion on GitHub about the fact that TypeScript's type system appears to be a Turing-complete language with some other cool examples [3]. My point is that the type system is incredibly expressive. You rarely run into an idiom that can't be typed effectively.
The SQL stuff sounds cool, but Ecto is so expressive I don’t even need to resort to raw SQL like I do in other languages. Elixir typing can handle a subset of intersection type which is rather niche, and of course handles union types which are more prevalent. Generics and inheritance are replaced with a more permissive style of polymorphism with Elixir behaviors and are also part of the typing system.
All languages have their own flavor, and their own pros and cons, and if the advanced types of TypeScript work for you, then great! In my own experience, I have found Elixir and its typing capabilities to work well for me.
> The SQL stuff sounds cool, but Ecto is so expressive I don’t even need to resort to raw SQL like I do in other languages.
The SQL example I linked to isn't something you'd use to interact with a database in production, for that you'd probably reach for an ORM like Prisma. I was just trying to demonstrate the level of type inference you can achieve with TS. Going from a pure string of SQL or JSON to a concrete type without actually executing any runtime code is pretty crazy.
> Elixir typing can handle a subset of intersection type which is rather niche
I personally use intersection types quite a bit. If union types are like the logical or operator, then intersection types are like the logical and operator. Being able to define a type that combines one type and another is not a niche workflow for me.
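A tiny sketch of that and/or distinction (the type names are made up for illustration):

    type HasId = { id: string };
    type Timestamped = { createdAt: Date };

    // Union: the value satisfies one shape *or* the other.
    type Either = HasId | Timestamped;

    // Intersection: the value must satisfy *both* shapes at once.
    type Both = HasId & Timestamped;

    const row: Both = { id: "abc123", createdAt: new Date() };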
> In my own experience, I have found Elixir and its typing capabilities to work well for me
Can't argue with that! Everyone has their preferences.
> Going from a pure string of SQL or JSON to a concrete type without actually executing any runtime code is pretty crazy.
Going from a JSON string to a type for it is actually one of the easier examples of inference I can imagine. JSON is a data description format in which all the base types are syntactically distinguishable, it has no variables, no arrows (i.e. functions), no generics. In the topic of type inference, you can't have a much easier example.
SQL is more complex, indeed, but still doesn't seem too crazy if you have access to table schemas. It's also a matter of whether triggers and stored procedures are taken into account, but I assume they're not.
There's a lot of prior art described in literature as well as practical programming implementations with much crazier, yet successfully working type inference.
I just want to make sure we're on the same page here. The JSON example I linked to isn't inferring the types of JSON that's already been parsed and deserialized, that would be trivially easy in any language (including TS). If I have an object that's been parsed from JSON, I can just use the typeof operator in TypeScript to infer the type of that object.
The example I linked to is taking a serialized JSON string, and parsing the literal characters in the string (characters like double quotes, commas, whitespace, etc) into a type, purely using type annotations. And the structure of that JSON can be of arbitrary nested depth.
All of this is accomplished using template literal types which allow you to make assertions about the contents of a string. In TypeScript you can assert more than just "this value should be a string". You can make detailed assertions about the structure of strings, and that's what allows these parser demos to exist.
When you combine these template literal types with recursive types, conditional types, and TypeScript's infer keyword, you can do some pretty interesting type level programming.
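As a small illustration of the mechanism (a toy I'm making up here, nowhere near the scale of the linked JSON parser): a type that recursively pulls route parameter names out of a string literal using template literal patterns, infer, and conditional types.

    // Collect ":param" names from a path string, purely at the type level.
    type ParamNames<Path extends string> =
      Path extends `${infer _Prefix}:${infer Param}/${infer Rest}`
        ? Param | ParamNames<Rest>
        : Path extends `${infer _Prefix}:${infer Param}`
          ? Param
          : never;

    // Resolves to the union "userId" | "postId" without executing any runtime code.
    type Params = ParamNames<"/users/:userId/posts/:postId">;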
Just to further demonstrate the point, there's an interpreter for the BF programming language, written entirely using TypeScript type annotations [1].
> There's a lot of prior art described in literature as well as practical programming implementations with much crazier, yet successfully working type inference.
I’m not the GP, but Elixir can already accomplish compile time text processing with metaprogramming (like it does for inline html templating for ‘heex’ functions and files) and it’s not a huge stretch for it to be able to call a JSON parser on any string knowable at compile time and convert it into a type. That it hasn’t been done yet is probably because no one has deemed it worthwhile to implement. It does sound cool though.
Metaprogramming is also why generics aren’t really needed in Elixir or Erlang. All specializations boil down to their reified types using macros
> I’m not the GP, but Elixir can already accomplish compile time text processing with metaprogramming (like it does for inline html templating for ‘heex’ functions and files)
It's not the compile time text processing that's interesting, it's the fact that this compile time code can yield incredibly detailed types, and the fact that it's all done purely using type annotations. Almost every language that's used to build websites has some sort of server side templating language where you can sprinkle some code into your html templates.
> That it hasn't been done yet is probably because no one has deemed it worthwhile to implement.
Maybe, or maybe it's less feasible without an advanced type system. The final version of the JSON example we're talking about ended up being 61 lines of code. Without some concrete Elixir code to inspect, this conversation is becoming very abstract.
Honestly even if some of this were possible with metaprogramming, my intuition is that it would be much more verbose and complex. I feel like if I tried to implement all of the features of TypeScript by mucking around with an abstract syntax tree using metaprogramming, I would end up with a difficult to maintain crude approximation of the real thing. I don't think we're giving compiler developers enough credit by saying everything they've worked on can be replaced with a quick macro. José Valim's blog post on static typing ended with an announcement that they've sponsored a PhD student to work on these problems.
After two days of discussion, I think we're reaching a bit of an impasse. Honestly just use what works for you!
> It's not the compile time text processing that's interesting
You've talked right past me. Metaprogramming is not compile time text processing.
> Honestly even if some of this were possible with metaprogramming, my intuition is that it would be much more verbose and complex.
No. It's probably even how the Microsoft team are achieving the typing you're talking about. The special sauce here isn't as special as you think.
> I think we're reaching a bit of an impasse
Not really. I was just saying that what you're talking about is totally possible with metaprogramming. Not making some rhetorical play.
> José Valim's blog post on static typing ended with an announcement that they've sponsored a PhD student to work on these problems.
That's specifically because strict static typing with guards^ is at least difficult, but maybe impossible, and that's what the PhDs are trying to figure out. However, dialyzer isn't as hobbled as you imagine/purport it to be – the crux of my counterargument – and actual experience trumps speculation or casual reading on the topic.
FYI, guards and pattern matching are one of the reasons why Elixir is so damn expressive, and this kind of function typing isn't available/possible in most non-functional languages
> You've talked right past me. Metaprogramming is not compile time text processing.
I'm not talking past you. I never said metaprogramming is compile time text processing. You said "Elixir can already accomplish compile time text processing with metaprogramming", and I was just pointing out that the text processing itself is not the most interesting part of the example, it's the resultant types.
> No. It's probably even how the Microsoft team are achieving the typing you're talking about. The special sauce here isn't as special as you think.
But I don't have to reimplement any of this, because Microsoft has already written it. With enough time maybe you could implement dependent typing using metaprogramming for example, but would you then say that Elixir is just as good at dependent typing as Idris, which has that feature built in?
> I was just saying that what you're talking about is totally possible with metaprogramming.
You've graduated from saying "I think someone more savvy with Elixir would know more." to "totally possible"[1]. This does not sound like an argument from experience.
> That's specifically because strict static typing with guards^ is at least difficult, but maybe impossible, and that's what the PhDs are trying to figure out.
A lot of what you're talking about has already been implemented in other languages. This is not untrodden ground. There's no syntactic sugar for guards in JS (which TS just adds type annotations to), but semantically it's very similar to type narrowing in TS [2], which I use daily. This is not something I've read about casually on the internet.
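For reference, a minimal sketch of that narrowing (contrived shapes, not from any real codebase): the runtime check on the kind field narrows the union, much like a guard clause plus pattern matching would.

    type Shape =
      | { kind: "circle"; radius: number }
      | { kind: "rect"; width: number; height: number };

    function area(shape: Shape): number {
      if (shape.kind === "circle") {
        // Narrowed: `shape` is the circle member here, so `radius` is available.
        return Math.PI * shape.radius ** 2;
      }
      // Narrowed to the rect member by elimination.
      return shape.width * shape.height;
    }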
> This is not something I've read about casually on the internet
Was speaking specifically about your opinions on Dialyzer and Elixir. It's very much a cursory Google around and having stronger opinions about the limits of a language than people who spend time with the language.
> You've graduated from saying "I think someone more savvy with Elixir would know more."
It took further reflection on the idea. It helps to start with at least some epistemic humility, once again the crux of this thread. I'm willing to revert to "It ought to be possible", given that macros boil down to the final generated code (including typespecs) which is then analyzed by Dialyzer.
> Elixir is just as good at dependent typing as Idris, which has that feature built in?
Metaprogramming allows features to become 'built in'. e.g., being able to integrate type checking and compiler errors for HTML/XML or other DSLs.
> Was speaking specifically about your opinions on Dialyzer and Elixir. It's very much a cursory Google around and having stronger opinions about the limits of a language than people who spend time with the language.
I think your experience with Elixir and Dialyzer (or perhaps emotional attachment to these tools) is blinding you to the fact that there are features that your favorite language lacks. I gave you a concrete example of something that's a research topic in Elixir (inferring types from guards), which has an existent analogue in TS (type narrowing). You completely ignored that and fixated on the one sentence about experience. Your entire argument is that I should defer to your authority on the topic as an anonymous commenter, because you're supposedly very experienced. It feels like a very hollow appeal to authority.
> It took further reflection on the idea.
But by your standards simple reflection is not enough to establish authority on a topic. You need to have deep personal experience with using macros to parse text before you're qualified to make a firm judgement. You even went so far as to speculate on how TypeScript implements type checking and claimed that the "special sauce" is probably metaprogramming, despite having never worked on the compiler. This all feels incredibly contradictory.
> Metaprogramming allows features to become 'built in'. e.g., being able to integrate type checking and compiler errors for HTML/XML or other DSLs.
Built in does not mean "I might hypothetically be able to write my own macro that does this". It means it already exists in the language today.
Honestly this discussion feels like it's devolved into repetitive bickering, which is why I let your original comment about metaprogramming stand unanswered, and you reacted by following me into a different comment thread to reiterate the point.
You've really misjudged me. I've been programming professionally for 20 years, I have used and still use a variety of languages including TypeScript.
> Your entire argument is that I should defer to your authority
Never made such an argument. My argument is that you have less authority on the subject matter than someone who has spent years with the language.
> I gave you a concrete example of something that's a research topic in Elixir (inferring types from guards), which has an existent analogue in TS (type narrowing)
At best it's a subset of the functionality rather than an analogue. This kind of research is being conducted by experts in type theory outside of Elixir. If it's a problem for Elixir, it's a problem for any other language that would attempt it, and absolutely I would defer to the authority of those experts who have spent years looking at type theory.
> despite having never worked on the compiler
Knowledge of meta-programming doesn't require the same skillset as writing a compiler. I'm certain you, or any other capable software engineer, would be able to write a macro that parsed JSON, interrogated the typing of that JSON, and spat out a typespec, given some knowledge of Elixir's `defmacro`, `quote`, and `unquote`.
> Built in does not mean "I might hypothetically be able to write my own macro that does this".
Never made such a claim. My claim is that the JSON typing mechanism isn't really something that's dependent on the TypeScript language and toolchain. As with the link in my last post about inline heex components, it's possible to take a text representation of something in Elixir and transform it into something that's available to the compiler (and hence also available to Dialyzer).
> Going from a pure string of SQL or JSON to a concrete type without actually executing any runtime code is pretty crazy.
Elixir/Erlang might already be able to do something like this with metaprogramming. It’s certainly possible to generate and run Elixir at compile time, and map types are already a good superset for JSON objects, so a compile-time JSON-to-map conversion could then provide an inferrable type. I think someone more savvy with Elixir would know more. It’s certainly not something that I’ve needed.
> But even assuming Dialyzer is as good as TypeScript, have you found that the libraries you use actually have quality type definitions?
Strictly speaking, Dialyzer has two big benefits over more traditional type systems:
1. It doesn't require type definitions to be present, it can type check code having none at all.
2. "Dialyzer is never wrong", i.e. it never returns false positives, i.e. it's an underapproximating checker.
These are the design decisions taken by the tool authors. There's rationale behind them, though the experience of using checkers for other languages might not be 1-to-1 applicable because of that. These decisions come with downsides, of course, some of which are covered in this thread, so I won't repeat them, but in general they were taken deliberately due to characteristics of the language(s) (it was originally created for Erlang, not Elixir) and the state of type system theory at the time. Please bear in mind Dialyzer was created in 2006, some 6-7 years before TypeScript.
If you're into trying out static typing in Elixir, please check out https://github.com/esl/gradient. It's still experimental, but already functional. We're happy to get any feedback or, better yet, contributions.
Very few languages operate under the same constraints as js. When you ship js you can't guarantee the version of ECMAScript that the client will be running, or the standard library of DOM functions that will be available (which differ slightly from browser to browser), so you end up transpiling your code to the least common denominator.
You also have completely different performance requirements compared to most other languages. If I ship a python app I don't have to worry about reducing the length of variables names to shave off a few bytes, or bundling multiple files together to reduce the number of http requests. Other languages don't need to dynamically load code via http requests, they generally run under the assumption that all of the code is available before execution.
The closest comparison outside of the browser would be to the container ecosystem, which also runs code in an environment agnostic way, and there's plenty of complexity and volatility there (podman, buildah, docker, nerdctl, k8s, microk8s, k3s, k0s, nomad, docker swarm, docker compose, podman compose, et cetera).
> The closest comparison outside of the browser would be to the container ecosystem
And as someone who has worked on both, I can tell you that the container ecosystem is way better and way more deterministic. `Dockerfile` from 10 years back would work today as well. Any non-trivial package.json written even a few years ago would have half the packages deprecated in a non-backward-compatible way!
There is another similar ecosystem of mobile apps. That's also way superior in terms of the developer experience.
> Other languages don't need to dynamically load code via http requests, they generally run under the assumption that all of the code is available before execution.
And that's not what I am objecting to.
My concern is that the core JS specification is so barebones that it fragments right from the start.
1. There isn't a standard project format
2. There isn't a single framework that's backward compatible for 5+ years.
3. There isn't even an agreement on the right build tools (npm vs yarn vs pnpm...)
4. There isn't an agreement on how to do multi-threaded async work
You make different choices and soon every single JS project looks drastically different from every other project.
Compare this to Java (older than JS!) or Go (newer than JS but highly opinionated). People writing code in Java or Go don't expect their builds to fail ~1-5% of the time. Nor are their frameworks changed in backward-incompatible ways every few years.
> `Dockerfile` from 10 years back would work today as well.
I highly doubt that any Dockerfile from back then would work if it runs `apt-get` (as many do), as the mirrors for the old distribution versions aren't online anymore.
Dockerfiles can be made to be quite deterministic, but many use `FROM` with unpinned tags and install from URLs that can and do go away.
Exactly! Dockerfiles are not deterministic. The build artifacts that they produce (images) are, but the same could be said of js build artifacts (which would be a set of compiled and bundled js files).
Having worked on package management in all the verticals you’ve mentioned, none of what you said is true.
Packages in most ecosystems are fetched over HTTP and those packages disappear. If you’re lucky those packages are stored in a centrally maintained repository like npm, distro repos, etc. If you’re unlucky it’s a decentralized system like early go where anyone can host their own repo. Anyone running builds at scale have caches in place to deal with ecosystem weirdness otherwise your builds stop working randomly through the day.
Re: Go, good luck getting a Go package from 10 years back to compile; imports directly addressed the repository the code lived in! This was a major problem for large projects that literally failed and were abandoned halfway through the dev cycle because their dependencies disappeared.
Re: Docker - Good luck with rerunning a glorified series of shell scripts every build. There’s a reason we stopped doing Ansible. When you run simple shell scripts locally they seem infallible. Run that same script over 1000s of consecutive builds and you’ll find all sorts of glorious edge cases. Docker fakes reproducibility by using snapshots at every step, but those are extremely fragile when you need to update any layer. You’ll go to rebake an image from a year ago to update the OS and find out the Dockerfile won’t build anymore.
Apt is a glorified tarball (ar-chive) with a manifest and shell scripts. Pkg too. Each with risks of misplacing files. *nix systems in general all share a global namespace and YOLO unpack an archive followed by running scripts with risk of irreversibly borking your system during an update. We have all sorts of snapshotting flows to deal with this duct tape and popsicle stick approach to package management.
That package management in pretty much any ecosystem works well enough to keep the industry chugging along is nothing short of a miracle. And by miracle I mean many many human lifetimes wasted pulling hair out over these systems misbehaving.
You go back and read the last two decades of LISA papers and they’re all rehashing the same problems maintaining packages across large systems deployments with little real innovation until the Nix paper.
> And as someone who has worked on both, I can tell you that the container ecosystem is way better and way more deterministic. `Dockerfile` from 10 years back would work today as well. Any non-trivial package.json written even a few years ago would have half the packages deprecated in a non-backward-compatible way!
As I wrote elsewhere [1], Dockerfiles are not deterministic. The build artifacts that they produce are deterministic, but that would be comparing a build artifact to a build system.
> There is another similar ecosystem of mobile apps. That's also way superior in terms of the developer experience.
Mobile app users have different performance expectations. No one bats an eye if a mobile app takes several minutes to download/update, but a website that does so would be considered an atrocity.
> And that's not what I am objecting to. My concern is that the core JS specification is so barebones that it fragments right from the start.
JS is actually really well specified by ECMA. There are so many languages where the formal specification is "whatever the most popular compiler outputs".
> You make different choices and soon every single JS project looks drastically different from every other project.
The same could be said of any other moderately complex project written in a different language. Look at the Techempower benchmarks for Java, and tell me those projects all look identical [2].
> 1. There isn't a standard project format 2. There isn't a single framework that's backward compatible for 5+ years. 3. There isn't even an agreement on the right build tools (npm vs yarn vs pnpm...) 4. There isn't an agreement on how to do multi-threaded async work
A lot of the complexity you're describing stems from running in the browser. A server-side js project that returns plain html with a standard templating language is remarkably stable. Express has been on version 4.x.x for literally 9 years [3]. The package.json format is supported by yarn, npm, and pnpm. As long as you have a valid lock file and install dependencies using npm ci, you really shouldn't have too many issues running most js projects. I'm not sure what issues you've had with multi-threaded async. The standard for multi-threading in js is web workers (which are called worker threads in node). The js ecosystem is not like Scala or Rust, where there's tokio and akka. JS uses promises for concurrency, and workers for parallelism.
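To sketch what I mean (a minimal example assuming Node's worker_threads module and an ES module build; the numbers and file layout are arbitrary): the main thread awaits a Promise while a worker thread does the CPU-bound part in parallel.

    // Promise-based concurrency on the main thread, a worker thread for parallelism.
    import { Worker, isMainThread, parentPort, workerData } from "node:worker_threads";
    import { fileURLToPath } from "node:url";

    if (isMainThread) {
      const sum = await new Promise<number>((resolve, reject) => {
        // Re-run this same (compiled) file as the worker entry point.
        const worker = new Worker(fileURLToPath(import.meta.url), { workerData: 10_000_000 });
        worker.on("message", resolve);
        worker.on("error", reject);
      });
      console.log("sum from worker:", sum);
    } else {
      // CPU-bound loop that would otherwise block the event loop.
      let total = 0;
      for (let i = 0; i < (workerData as number); i++) total += i;
      parentPort?.postMessage(total);
    }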
> Mobile app users have different performance expectations. No one bats an eye if a mobile app takes several minutes to download/update, but a website that does so would be considered an atrocity.
Well if it updates in my face I'd be pretty annoyed. The mobile app thing only works when they update in the background/transparently.
Well yeah if you had to wait for apps to update before you could use them you'd definitely be annoyed, but the beauty of mobile (and desktop) apps is that users don't expect to constantly be running the latest version of a given app, which means you can slowly update large apps in the background.
When you visit a website you expect to always be running the latest version of that website. In fact, most users aren't even consciously aware of the fact that websites have versions at all.
> When you ship js you can't guarantee the version of ECMAScript that the client will be running, or the standard library of DOM functions that will be available (which differ slightly from browser to browser), so you end up transpiling your code to the least common denominator.
Isn't that the same as shipping native binaries? You don't know what OS version or libraries it will run on. That's why you do stuff like link with the oldest glibc you want to support.
The main difference between shipping a binary and a js file is that users don't expect binaries to be small, which means you can usually ship an entire runtime with your binary. If you shipped every single js polyfill with your website performance would tank. You also generally differentiate between downloading a binary and running it, and users will tolerate a loading spinner while a massive binary downloads.
Webpack will emit a warning if any of your build artifacts are larger than 244KB, whereas a 244KB binary would be considered anemic.
Binaries were definitely leaner in the past, but there's always been that dichotomy between downloading software and running it.
In the browser, users expect software to be available instantly, and that constrains how you build webapps. Users will tolerate the Google Maps app taking a few minutes to download, but they won't accept the Google Maps webapp taking several minutes to load in a browser.
A well tested compiler is far more deterministic than an LLM, and can be largely treated as a black box because it won't randomly hallucinate output.