Sorry for the Kumu 500s! I moved some critical parts of our service away from EC2 classic on Saturday morning. One of our servers was accidentally recreated with an 8GB volume instead of 1TB and the HN hug caused a process to blow through the remaining disk space with indexes. Should be back to working order now!
This feels a lot like Vim's built in file explorer (netrw).
I find these kinds of text-based, keyboard-centric explorers far superior for navigating around a codebase, and they give you back your screen space as soon as you've found what you're looking for.
Building network & viz tools for tackling complex problems.
React, TypeScript, Rails, Node, Postgres, CouchDB, AWS. We're currently undergoing an ambitious modernization process for our main codebase, and looking for engineers who like a challenge.
Bonus points if you're happy getting your hands dirty with AWS, Linux, and virtual machines.
We're a small and flexible remote team who place a lot of value on physical and mental health, great lifestyles, and doing meaningful work.
Looking for enthusiastic developers to help us build tools for tackling complex problems.
React, TypeScript, Rails, Node, Postgres, CouchDB, AWS. We're currently undergoing an ambitious modernization process for our main codebase, and looking for engineers who like a challenge.
Bonus points if you're happy getting your hands dirty with Linux, Packer, and virtual machines.
We're a small and flexible remote team who place a lot of value on physical and mental health, great lifestyles, and doing meaningful work.
Doesn't feel unreasonable to conflate the two. npm is the dominant way to handle dependencies for frontend and backend, even some of the most popular script-tag delivery CDNs are powered by npm (unpkg, skypack).
Precisely. Which is why it's NPM's dependency problem, not JS's. Avoid NPM and no dependency problems (well, fewer anyway).
Liked your no tool article, but I laughed at this bit: "Traditionally, using other people's code has required some combination of npm and yarn to download those modules". NPM is only 12 years old (and Yarn is newer) — that's less than half the age of JS. No, 'traditionally' you just viewed the source and copied the code you wanted, or went to a project page and downloaded the script.
Why is it great to be able to publish a package quickly? You may be a smart programmer who only releases production-quality, bug-free, vulnerability-free code, but is it a good thing that inexperienced developers or malicious users can publish packages with the same ease and speed?
No one wants an ecosystem with only jQuery, but there's a middle ground somewhere before you get to 2 million packages. Competing frontend frameworks fit comfortably within that space. I just don't want a world where there are 16 competing packages that all implement a slider in React.
There's a huge difference if you only use npm for personal projects, too. The consequences of picking unmaintained/undocumented/insecure/buggy software are much, much lower if you can afford to rewrite/throw away in a week's time.
I don't think we talk enough about the downsides of DRY.
Reinventing the wheel for the sake of reinventing the wheel (not-invented-here) is a problem, but reinventing it for the sake of learning more about wheels is genuinely valuable.
I suppose many developers reach for libraries because they're more confident that the libraries will implement things correctly/more efficiently than they could. But if they keep reaching for libraries (instead of trying to write an `isEven` function themselves) then they never really improve either.
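To make the point concrete, here's the kind of trivial helper (using `isEven` as a deliberately extreme example) that gets pulled in as a dependency but takes one line to write yourself:

```javascript
// The whole "library", written by hand.
function isEven(n) {
  return n % 2 === 0;
}

console.log(isEven(4)); // true
console.log(isEven(7)); // false
```

Writing it yourself costs a minute and teaches you something; installing it costs you a dependency forever.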
Most developers work under constant time pressure and usually with a fixed budget. Choices have to be made. Sometimes ugly code will creep in, because doing better is simply not worth the investment. Many projects don't have a very long lifespan anyway.
Pet projects usually get a lot more attention, but rewriting existing libraries is not exactly fun and will require maintenance once made public, so that does not happen often.
> Give better cues that we are in an asynchronous mental model
The `async` keyword(!) is objectively a clearer signal that the code in question is asynchronous. That's why type-checkers use it to prevent you from doing dumb stuff like awaiting in a synchronous function.
> In simple cases express the code at least as cleanly as async/await
It's pretty hard to see much of an argument here. How can the promise version ever be seen as "at least as clean"?
await someTask()
// vs
someTask().then(() => ...);
Even beyond the syntax here, the promise version forces you into at least one level of nesting, which instantly makes it much trickier to manage intermediate variables: they either need to be passed along as part of the result (usually producing an unwieldy blob of data) or the promises need to be nested (pyramid of doom) so that inner callbacks can access the results from outer promises.
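A small sketch of the problem, using two hypothetical helpers (`getUser` and `getOrders` are stand-ins, not real APIs):

```javascript
// Hypothetical helpers for illustration.
const getUser = () => Promise.resolve({ id: 1, name: "Ada" });
const getOrders = (userId) => Promise.resolve([{ userId, total: 42 }]);

// Promise version: the inner callback needs `user`, so the chain
// gets nested (or `user` must be threaded through every .then()).
function reportThen() {
  return getUser().then((user) =>
    getOrders(user.id).then((orders) => `${user.name}: ${orders.length} order(s)`)
  );
}

// async/await version: intermediate values are just local variables.
async function reportAwait() {
  const user = await getUser();
  const orders = await getOrders(user.id);
  return `${user.name}: ${orders.length} order(s)`;
}
```

Both produce the same result, but only one of them scales gracefully as you add a third or fourth intermediate value.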
> Provides a much cleaner option for more complex workflows that include error handling and parallelisation.
If you've somehow come to the conclusion that `Promise.all` doesn't work with `async/await` then you have probably misunderstood the relationship between `async` functions and promises. They're the same thing. Want to parallelise a bunch of `await` statements? You can still use `Promise.all`!
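For the avoidance of doubt, a minimal sketch (`fetchValue` is a hypothetical stand-in for any async task):

```javascript
// Hypothetical async task for illustration.
const fetchValue = async (n) => n * 2;

// `async` functions return promises, so Promise.all composes with
// them directly: the three tasks are started concurrently, and we
// await the combined result.
async function inParallel() {
  const [a, b, c] = await Promise.all([fetchValue(1), fetchValue(2), fetchValue(3)]);
  return a + b + c;
}
```

No callbacks required: the parallelism comes from starting all the promises before awaiting any of them.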
I do occasionally find try-catch to be awkward, but that's because it creates a new lexical scope (just like promise callbacks do). I also think the consistency from having one unified way to handle errors in sync/async contexts justifies it.
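The scoping awkwardness looks like this (`someTask` is a hypothetical stand-in):

```javascript
// Hypothetical async task for illustration.
const someTask = async () => "ok";

async function run() {
  // `result` must be declared outside the try block to be visible
  // after it -- much like a value inside .then() is trapped in its
  // callback's scope.
  let result;
  try {
    result = await someTask();
  } catch (err) {
    result = "fallback";
  }
  return result;
}
```

Mildly annoying, but the same `try`/`catch` works unchanged for sync code, which is the consistency win.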
My take is that the lack of experience for the average JavaScript developer is absolutely a factor here. I don't think it's the only factor though. Here are some of the other pieces of the puzzle.
JavaScript's standard library is so thin on the ground that there's already a culture of "reaching for a library" to accomplish tasks that many languages do out of the box.
The monoculture is wide enough that the language caters to lots of paradigms and schools of thought. If there's one library that uses classes and method chaining, you can be sure that another will pop up to re-implement the same functionality in a pure functional style. One will focus on type safety and another will abuse the dynamic bits of the language to make the code you write as terse as possible.
The amount of code shipped has always been a more important metric for JS than for other languages, because the nature of the web means that users have to wait whilst the source code is downloaded before your page becomes interactive (for a huge class of applications). This encourages developers to favour smaller libraries that solve narrower problem domains.
It's become very trendy to write a smaller, faster, better, smarter version of existing libraries. The JavaScript community loves the process of picking a catchy name, registering a domain, designing a logo, and publishing packages as though they were businesses. This creates an abundance of packages that look great on paper, but with no users, patchy/non-existent tests and maintainers that haven't ever used the code in a professional context.
Finally, I think JavaScript is a deceptively simple language. It doesn't take very long before people (mistakenly) think they're close to mastering the language. By comparison, contributing to an open source project in a meaningful way is quite difficult, so these developers assume that other libraries must be written badly if they find it hard to contribute. Then they write their own, because they believe they can do a better job.
The ecosystem as a whole sees a lot of innovation, and pays for that with a lot of churn and a lot of dependencies. From a theoretical standpoint, it's a fascinating corner of modern programming. In a professional context, it horrifies me and I wish I could sanely cut npm out of the chain.
I have played with them before, but I haven't properly tried them for this. I can see they solve half the problem (being able to open something twice). But there's still a lot left to be solved:
1. I need to check out ~8 branches
2. I then need to reinstall all dependencies for those branches.
3. I then need to run those branches locally on a different set of ports from my standard dev environment to avoid clashes (or shut down my main one temporarily)
4. If it involves the mobile app I may need to wait some time for a clean build
It's a lot of administrative overhead compared to my usual default in most cases: merging the branch after reading through it, then testing in our staging environment once CI has run.