
UsabilityHub | Engineering | Remote (Australia or New Zealand) | Full-time, 9-day fortnight, or 4-day week | https://usabilityhub.com/careers

Hey HN, Nick here, CTO and co-founder at UsabilityHub.

We're looking to hire a couple of engineers, with some flexibility on seniority level. We're a remote-first and cross-functional team, and UsabilityHub is a bootstrapped and successful business that's been focused on sustainable growth over the last ten years.

The team is small, close-knit, and has some really excellent engineers and designers. The focus of these roles is building new customer-facing functionality - we're effectively building three new products this year, so there's lots of greenfield work to do.

The big pieces of our tech stack are Ruby, Rails, TypeScript, and Postgres. We're not fussy if you haven't used some or any of those, but we prefer T-shaped developers who are keen and able to pick new things up quickly and contribute throughout the stack.

For the three roles we have open, starting compensation ranges from $120k AUD to $160k AUD (or NZD equivalents) plus ESOP and profit share. We review this every six months and adjust upwards based on performance in the role.

There's lots more info about these specific roles on our careers page at https://usabilityhub.com/careers. Feel free to reach out to me directly with any questions (email is in profile).


Yep: https://openai.com/blog/march-20-chatgpt-outage

Some kind of concurrency bug in a library they were using to retrieve cached data from Redis led to this leak.

> We took ChatGPT offline earlier this week due to a bug in an open-source library which allowed some users to see titles from another active user’s chat history. It’s also possible that the first message of a newly-created conversation was visible in someone else’s chat history if both users were active around the same time.
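
As a rough illustration of that class of bug (a hypothetical sketch, not OpenAI's or the library's actual code): if a request is cancelled after its command has been sent on a shared connection, but before its reply is read, the next request that reuses that connection can end up reading the previous user's reply.

    // Hypothetical TypeScript sketch of the failure class described above.
    type Reply = { userId: string; payload: string };

    class SharedConnection {
      private unread: Reply[] = [];

      // The server answers commands in the order they were sent.
      send(userId: string): void {
        this.unread.push({ userId, payload: `chat titles for ${userId}` });
      }

      // Reads the oldest unread reply, whoever it belongs to.
      read(): Reply | undefined {
        return this.unread.shift();
      }
    }

    const conn = new SharedConnection();

    conn.send("alice");       // request A is sent...
    // ...and then cancelled before its reply is consumed.

    conn.send("bob");         // request B reuses the pooled connection.
    console.log(conn.read()); // { userId: "alice", ... } - Bob sees Alice's data.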


Solving this is squarely Twilio's business!

They know how much to bill the customer, so they must know how much it costs to send to a number.


> They know how much to bill the customer

I don't mean to do Twilio's work of defending them, but in my experience it's possible they actually don't know how much to bill the customer. What they may know is the generalized per-minute or per-session rate they've agreed with another operator alongside a general "premium rate numbers will be settled at a later date" kind of clause.

My employer got bit by this several years ago, purely on calls within the +1 country code. Before this practice was largely banned, some small carriers were allowed to designate certain rate centers as higher cost. So our VoIP carrier would say that a call to a given area code was $0.003/minute, but the calls would later settle out at $0.25/minute because a 1,000-number block had been designated (unknown to us or our carrier) as higher cost and was settlement-billed back at the higher rate.

Twilio could agree to carry some or all of this risk for its customers as part of their value-add and fees. That way, Twilio has the incentive to make the proper changes for its customers and would have the experience of looking at all of the return billed rates for all of the calls or messages across its entire customer base to help prevent toll fraud.


This is the case; the telephone billing system is perhaps the most complicated pile of software shit you have ever seen in your LIFE - and some of it is insane.

There have been people who got printed bills from their cell phone provider for every single kilobyte of data, each individually indexed and billed: https://en.wikipedia.org/wiki/300-page_iPhone_bill


When I was an intern, I was working for a big company on a project to optimize some call center management. Basically put mainframe reports on an intranet.

The company had made a change of some sort where whatever EDI connection existed between the telco and the company stopped working. I learned this when an angry facilities guy came up looking for my boss's boss to sign for a delivery. I was the only person there, so I did. 15 minutes later, two pallets of banker's boxes came up - thousands of single-sided pages of itemized call details.


My favourite was getting charged for an SMS my iPhone sent, which was a phone-home to an Apple short code for iMessage. iOS hides these from the user. Most providers don't charge for this, but some do.

Really sucks when you carefully load 10 EUR of credit to buy a 10 EUR prepaid plan for the month and see 0.05 EUR deducted, despite being incredibly careful not to do anything that would incur a charge before buying the plan.


Apple DOES say that, when you set up FaceTime and iMessage!

There is a pop-up that says "Your carrier may charge for the messages used to activate iMessage and FaceTime." You can choose not to activate and do it later.


That warning did not appear in the early days.


That warning actually depends on the “carrier profile”, a configuration file the phone silently fetches (or has cached in firmware builds) based on certain attributes of the SIM like the ICCID or MCC/MNC.

There’s a field in there that configured whether that warning should be shown.


Correct, and it didn't appear for carriers which were whitelisted (who zero-rated the iMessage activation SMS).

My memory, which may be wrong, is telling me that the first major version of iOS which included iMessage did not include the warning at all, and that it was added for non-whitelisted carriers (aka those which did not sell the iPhone) to prepare the user for the possibility that they will be billed, based on user feedback precisely like the comment to which I was replying.


Fun fact: +1 is not a country, but all of North America. For a long time it was entirely possible to dial a perfectly ordinary-looking +1 268 xxxxxxx number and get charged up the wazoo because (268) is Antigua and Barbuda, not New Jersey.


Pissed me off how American Airlines wouldn't send me SMS updates to my Canadian number, even though it should cost only slightly more than a US number. I guess they got burned sending updates to some Caribbean island number in the past.

Ugh.


Surely they can aggregate this across all customers though.

If Twilio cops an unexpectedly high settlement for sending an SMS to +1234567890 in January, can they assume that a separate customer sending an SMS to that number in February will end up in the same boat?

I'd be very surprised if the toll fraudsters weren't using the same numbers to hit multiple Twilio accounts.


Is it banned? Isn't this part of how FreeConferenceCall works, with (IIRC) dial-in numbers on a little LEC somewhere in Iowa?


Yes, the rule became effective in 2019: https://www.fcc.gov/document/fcc-adopts-reforms-further-redu...

(FreeConferenceCall and similar companies lobbied heavily against this rule, but AT&T and Verizon were able to lobby it through.)


Yes, but other queries (any aggregate queries that don't join the soft-deleted table, any joins to other tables) will now return rows that would have been deleted under hard deletion with cascade.


This is definitely something to watch out for, but in practice (as someone who migrated a system without soft deletes to one that had them) I found that it doesn't tend to come up nearly as much as you might think. Usually the table being transitioned to support soft deletes is a relatively core table (since ancillary tables usually aren't worth the complexity to transition), so a lot of your reporting queries will already be pulling in that table. You definitely need to check to make sure you're not missing anything - and sometimes CRUD interfaces will need to be completely revamped to include the table in question - but it's usually not that hard.


You could use a trigger to cascade soft-delete flag toggles, provided all the relevant tables have such a column. Still have to factor that into your other queries, but at least you wouldn't have to make potentially-impossible-to-exhaustively-check joins to figure out if a given row was "deleted".
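
For anyone curious, a minimal sketch of that trigger idea against Postgres (table and column names are hypothetical; assumes a nullable `deleted_at` timestamp on both tables, run here as a one-off migration via the node-postgres `pg` client):

    import { Pool } from "pg";

    // Hypothetical migration: when posts.deleted_at changes, mirror it onto the
    // post's comments so soft deletes "cascade" the way ON DELETE CASCADE would.
    const pool = new Pool({ connectionString: process.env.DATABASE_URL });

    async function up(): Promise<void> {
      await pool.query(`
        CREATE OR REPLACE FUNCTION cascade_soft_delete() RETURNS trigger AS $$
        BEGIN
          UPDATE comments SET deleted_at = NEW.deleted_at WHERE post_id = NEW.id;
          RETURN NEW;
        END;
        $$ LANGUAGE plpgsql;

        CREATE TRIGGER posts_cascade_soft_delete
          AFTER UPDATE OF deleted_at ON posts
          FOR EACH ROW
          WHEN (NEW.deleted_at IS DISTINCT FROM OLD.deleted_at)
          EXECUTE FUNCTION cascade_soft_delete();
      `);
    }

    up().then(() => pool.end());

(Postgres 11+ syntax.) Note this only helps on the write side - every read still needs its deleted_at IS NULL filter, which is the point made above about other queries.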



> Performance only matters if you are in competition with others.

I'm not sure that's right. There's a strong link between performance and repeat usage. Even in the complete absence of competition, Google search being fast could result in more searches per user and, in turn, more revenue.


Because this is buried in the post and people don't seem to be grokking it:

> Second, on November 2 we received a report to our security bug bounty program of a vulnerability that would allow an attacker to publish new versions of any npm package using an account without proper authorization.

They correctly authenticated the attacker and checked they were authorised to upload a new version of their own package, but a malicious payload allowed the attacker to then upload a new version of a completely unrelated package that they weren't authorised for. Ouch!
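
The advisory is light on detail, but the shape of the bug sounds something like this (a hypothetical sketch, not npm's actual code): authorization is checked against the package name in the request, while the publish step trusts the name inside the uploaded tarball's package.json.

    // Hypothetical sketch of the authorization/publish mismatch described above.
    type Manifest = { name: string; version: string };

    function canPublish(user: string, packageName: string): boolean {
      // Stand-in for a real maintainer lookup.
      return packageName.startsWith(`@${user}/`);
    }

    function publish(name: string, version: string): void {
      console.log(`published ${name}@${version}`);
    }

    function handlePublish(user: string, requestedName: string, uploaded: Manifest): void {
      if (!canPublish(user, requestedName)) {
        throw new Error("forbidden");
      }
      // BUG: publishes under the name the attacker put inside the tarball,
      // which was never checked against the user's permissions.
      publish(uploaded.name, uploaded.version);
      // Fix: reject unless uploaded.name === requestedName (or authorize
      // against uploaded.name directly) before publishing.
    }

    handlePublish("mallory", "@mallory/innocent-pkg", { name: "some-popular-package", version: "99.9.9" });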


> However, the service that performs underlying updates to the registry data determined which package to publish based on the contents of the uploaded package file

Yeah, this is what's going to keep me up tonight. Yikes.

I can't help but wonder if the root cause was HTTP request smuggling, or if changing package.json was enough.

How do we even mitigate against these types of supply-chain attacks, aside from disabling run-scripts, using lockfiles and carefully auditing the entire dependency tree on every module update?

I'm seriously considering moving to a workflow of installing dependencies in containers or VMs, auditing them there, and then perhaps committing known-safe snapshots of node_modules into my repos (YUCK). Horrible developer experience, but at least it'll help me sleep at night.
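
For the run-scripts part at least, the knobs already exist; a minimal setup (assuming npm 7+) is to disable lifecycle scripts by default and install strictly from the lockfile with `npm ci` instead of `npm install`:

    # .npmrc (per project or per user) - don't run install/postinstall scripts
    ignore-scripts=true

The trade-off is that packages which legitimately need their scripts (native addons and the like) then have to be built explicitly, and none of this helps against malicious code that runs when the module is actually imported.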


> How do we even mitigate against these types of supply-chain attacks

Don’t import thousands of modules from third parties just to write a simple web app. If you have 10 stable dependencies it’s no problem to vendor them and vet changes. If you have 10k you’ve entirely given up on any pretence of security.


Recently the Node 16 LTS cycle started. One month and a few days before the carry-over, a super controversial package titled `corepack` [0] was officially declared a core module and has been bundled with all official distributions since.

The NodeJS team refuses to discuss NPM because it's a separate third party. And yet... this NodeJS core module comes pre-installed as a global NPM package.

We're just getting started.

This module installs (or even reinstalls) any supported package manager when you execute a command whose name matches one it recognises. It's opt-in for only a short period, and is intended to expand beyond package manager installation.

Amidst all that's been going on, NPM (Nonstop Published Moments) is working on a feature that silently hijacks user commands and installs foreign software. The code found in those compromised packages operated in a similar manner and was labeled a critical severity vulnerability.

The following might actually make you cry.

Of the third-party remote distributions it downloads, the number of checksums, keys, or even build configurations being verified is zero.

The game that Microsoft is playing with their recent acquisitions here is quite clear, but there's too much collateral damage.

[0] https://github.com/nodejs/corepack#readme


Not that I agree with the methodology `corepack enable` introduces - providing OS shims for the specific package manager commands that download them on demand...

corepack (or "package manager manager") was transferred to be a Node.js Foundation project and voted into the release by the Node.js Technical Steering Committee. The one member I'm aware of who is affiliated with GitHub/NPM abstained from the vote. The specific utility of corepack is being championed by the package managers not distributed with Node, so that (Microsoft's) `npm` is not the single default choice.

I'm interested to hear what parts of this you see as coming from Microsoft/NPM, as I didn't get that vibe. In my view this was more likely a reaction to the Microsoft acquisition (npm previously being a benign tumour; doctors are now suggesting it may grow :)



I think Corepack is a bad idea and have explicitly added feedback to say so. That said, I know you're misrepresenting the situation (whether intentionally or not) by suggesting this is a Microsoft initiative. It's not: Microsoft acquired npm, and if anything this is meant to distance Node from that acquisition.


Whether this is entirely by design I don't know, but Microsoft's positioning in the ecosystem is just brilliant. They're like a force of nature now.

NPM's security issues prime the ecosystem for privacy and security topic marketing (ongoing, check their blog), which is leveraged to increase demand for Github's new cloud-based services.

In the meantime they will just carry on moving parts of NPM to Github until there's so little of the former left, that it'll be hard to justify sticking with it rather than just moving to Github's registry like everyone else.

Eventually NPM gets snuffed-out and people will either be glad it's finally gone, or perhaps not even notice.


To reiterate what sibling comments said: I'm the one who spawned the discussion and implementation of Corepack, and npm remained largely out of it; the push mostly came from pnpm and Yarn.

Additionally, unlike other approaches, Corepack ensures that package manager versions are pinned per project and you don't need to blindly install newest ones via `npm i -g npm` (which could potentially be hijacked via the type of vulnerability discussed here). It intends to make your projects more secure, not less.
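
For reference, the pinning works through the `packageManager` field in package.json; with `corepack enable`, running the pinned tool inside that project fetches and uses exactly the declared version (the version below is made up):

    {
      "name": "my-app",
      "packageManager": "yarn@3.1.0"
    }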


If anything this makes it worse.

- No security checks are present in the package manager download and installation process so there are still no guarantees.

- Existing installations of package managers are automatically overwritten when the user calls their binary. What if this was a custom compilation or other customisations were made?

- This solution does a lot more behind the scenes than just running the yarn command that the user asked for but hadn't installed.

- Why not simply notify the user when their package manager isn't installed or only allow it with a forced flag? (As has been suggested uncountable times by numerous people anywhere this topic came up over the years.)

Disrespecting user autonomy, capacity to self-regulate, and ownership over their machine and code is not the way.

Edit: Formatting


People don't directly import thousands of modules. It's actually a lot closer to your "10 stable dependencies". But those dependencies have dependencies that have dependencies. It's a little hard to point the finger at application developers here, IMO.


Some of the comments in this thread are wild. Huge dependency trees are a bad pattern, plain and simple.

The problem isn't only the ridiculous amount of untrusted code, but the thousands of new developers over the last 10 years who think this is the way to write reliable code - who never acknowledged the risks of having everyone else write your code for you, and who overestimate how unique and interesting their apps are.

If you must participate in this madness, static analysis tools exist to scan your 10,000 dependencies; taking security seriously is the issue.


> Huge dependency trees are a bad pattern, plain and simple.

And what's the alternative? Do you write your own libraries to store and check password hashes, complete with hash and salt functions? Roll your own Google OAuth flow? Your own user session management library?

It's madness on either side; the difference is that `npm install` and pray allows you to actually get things done.


A large standard library is a big part of the solution. Your project may pull in a crypto library that includes password hashing, and an oauth library, and a session management library, but all of those libraries will have few or no dependencies outside of the standard library.


Every time this discussion about the JavaScript ecosystem and its "problems" comes up, the solution everyone brings to the table is "have a large standard library".

You know JavaScript doesn't have one, don't you? That is why this "issue" exists. Putting the cat back in the bag is impossible.


When vetting a dependency, consider whether it depends on packages you already depend on. If the new dependency tree is too large, try breaking down the desired functionality: multiple smaller, lower-level direct dependencies may have a lighter overall footprint. Take them, some built-ins, and some of your own glue code, and you get the same thing with fewer holes.

More tangentially, use persistent lockfiles, do periodic upgrades when warranted (e.g. when relevant advisories are out), and check the new versions getting installed.


You use a trusted standard library which has crypto functions (and lots of other helpers) and for small things you write your own.

Yes, you can write your own things like session management, and yes, that is better than the entire web depending on a module for session management which depends on a module which depends on a module maintained by a bored teenager in Russia.

Please do check out other ecosystems, there is another way.


> And what's the alternative?

Using a small number of libraries, where each library provides a large amount of functionality. When I install Django, for instance, four packages are installed, and each package does a substantial amount of work. I don't have to install 1000 packages where each package is three lines of code.


When I'm writing a C program I can somehow depend on only one library for password hashing and one for OAuth (maybe two if it also needs curl). In JavaScript land it's probably a couple dozen, probably from a couple dozen different people.


It's not two if you need curl. curl has a large number of dependencies; the only difference is that with npm they're visible.


How many developers write C programs versus how many developers write JS apps?

Without accounting for that, your comparison makes no sense! Not to mention that you're comparing languages at very different levels - a low-level language like C would never behave like a higher-level language.


> static analysis tools exist to scan your 10000 dependencies

Maybe this is a dumb question but could you please suggest some of these tools that can scan dependencies?


Most of those dependencies have a well-defined, stable API. They use, or at least try to follow, semver. And you're probably only hitting about 10% of your dependencies on the critical path you're using, meaning that a lot of potentially vulnerable code is never executed.

I get the supply chain attacks. I get that you have a tree of untrusted JavaScript code that you're executing in your app, on install, on build, and at runtime. But there's also Snyk and Dependabot, which issue alerts when your dependency tree has published CVEs.

We can talk about alert fatigue, but to be honest, I feel more secure with my node_modules folder than I do with my operating system and plethora of DLLs it loads.

I don't wanna turn this into a whataboutism argument, but at some point you gotta get to work, write some code and depend on some libraries other people have written.


> And you're probably only hitting about 10% of your dependencies on the critical path you're using, meaning that a lot of potentially vulnerable code is never executed.

If a dependency has been compromised it doesn't matter if its code is actually used, since it can include a lifecycle script that's executed at install-time, which was apparently the mechanism for the recent ua-parser-js exploit.
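
For anyone unfamiliar, that mechanism is just a lifecycle entry in the compromised package's own package.json, which npm runs automatically at install time unless scripts are disabled; schematically (names and filename below are made-up placeholders):

    {
      "name": "some-compromised-package",
      "version": "1.2.3",
      "scripts": {
        "postinstall": "node ./steal-credentials.js"
      }
    }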


> Snyk and Dependabot

Wait, I’m not safe using “npm audit”?


Semver will not save you.


This is the direct result of the culture of tiny dependencies in JS and some other languages, but not all ecosystems are like this. If you choose to use node, this is where you end up, but it was a choice.

Many languages have a decent standard library which covers most of the bases, so it’s possible to have a very restricted set of dependencies.


Unfortunately for frontend and the Node ecosystem, it's too late to try and put the toothpaste back in the tube.

Hopefully Deno helps with this pain point.


I mean, you say that, but the practice of pulling in so many dependencies is fairly recent. It wasn't even possible for most projects before everyone had fast internet.


> It's a little hard to point the finger at application developers here, IMO.

I disagree. Any application developer who seriously thinks that they only have 10 dependencies if they're only importing directly 10 dependencies should not be an application developer in the first place.


You sure about that? Even if you’re writing just for a vetted distribution of an OS, and you write code with zero explicit dependencies, you still have much more than zero dependencies. It’s turtles all the way down. The key is to have an entire ecosystem that you can, to some degree more or less, trust.


No. We've been shouting warnings for years. There have been dozens, if not hundreds, of threads on HN alone warning of supply-chain security threats.

At this point if you're not actively auditing your dependencies, and reducing all of them where you can, then you're on the wrong side of history and going down with the Titanic.

The frank truth is that including a dependency is, and always has been, giving a random person from the internet commit privileges to prod. The fact that "everyone else did it" doesn't make it less stupid.


> The frank truth is that including a dependency is, and always has been, giving a random person from the internet commit privileges to prod

I mean, no. This is hyperbole at best and just wrong at median. A system of relative trust has worked very well for a very long time - Linus doesn’t have root access to all our systems, even if we don’t have to read every line of code.


Linus doesn't have root access to our systems for several reasons. One of them is the fact that we get the actual source code, and not just a compiled blob doing "something". Another is the fact that they have at least some level of reviews wrt who can commit code, although this isn't perfect as the case with the University of Minnesota proved.

Npm on the other hand is much, much worse. Anyone can publish anything they want, and they can point to any random source code repository claiming that this is the source. If we look at how often vulnerable packages are discovered in eg. npm, I'd argue that the current level of trust and quality aren't sustainable, partly due to the potentially huge number of direct and transitive dependencies a project may have.

Unless you start to review the actual component you have no way to verify this, and unlike the Linux kernel there is no promise that anyone has ever reviewed the package you download. You can of course add free tools such as the OWASP Dependency Check, but these will typically lag a bit behind as they rely on published vulnerabilities. Other tools such as the Sonatype Nexus platform is more proactive, but can be expensive.


Maybe this is arguing semantics, but unless you run something like Gentoo you will most likely get the Linux kernel as a binary blob contained in a package your distribution provides. There isn't really any guarantee that this will actually contain untampered Linux kernel sources (and in the case of something like RHEL it most likely doesn't, because of backports) unless you audit it, which most people won't do (and maybe can't do). So, in principle at least, this isn't really that much better than the node_modules situation. Security and trust are hard issues, and piling on hundreds of random JS dependencies sure doesn't help, but you either build everything yourself or you need to trust somebody at some point.


It depends on how you look at it. If I'm running Debian, I have decided to trust their sources and their process, regardless of how their software is being delivered. That process and the implementation of it is the basis for my trust. If I'm really paranoid, I can even attempt to reproduce the builds to verify that nothing has changed between the source and the binary blob.[1]

For npm, trust isn't really a concept. The repository is just a channel used to publish packages, they don't accept any responsibility for anything published, which is fair considering they allow anyone to publish for free. There are no mechanisms in npm that can help you verify the origin of a package and point to a publicly available source code repository or that ensures that the account owner was actually the person who published the package.

Security and trust is very hard, but my point here is that npm does nothing to facilitate either, making it very difficult for the average developer to be aware of any issues. The one tool you get with npm is...not really working the way it was supposed to.[2]

1 - https://reproducible-builds.org/
2 - https://news.ycombinator.com/item?id=27761334


I 100% agree, and I kind of wonder why this doesn't seem to be a problem with similar repositories like Maven. That doesn't seem to hit HN every 1-2 weeks with a new security flaw or compromised package, so they seem to be doing something right, whatever that may be.


It's likely to be a combination of several things. Npm is trendy and has a low threshold for getting started, plus the fact that adding e.g. Bitcoin miners to a website is a nice way to decentralize and ramp up mining capacity.

Maven, on the other hand, defines several requirements, such as all files in a package being signed and more metadata, and they also provide free tools developers can use to improve the quality of a package.


You do need to trust somebody (such as your Linux distribution of choice) but with NPM you're trusting thousands of somebodies and your system's security depends directly on all of them being secure and trustworthy.


Yeah, that is true. And npm as a whole doesn't really have a good track record in being worthy of a lot of trust.


Linux has all sorts of controls and review policies that NPM doesn't have. It's a false equivalence to say "we trust Linux, so therefore trusting NPM is OK".

If <random maintainer> commits code to their repo, pushes it to npm, and you pull that in to your project (possibly as an indirect dependency), what controls are in place to ensure that that code is not malicious? As far as I can tell, there are none. So how is this not trusting that <random maintainer> with commit-to-prod privileges?


Yeah, this is what I meant, except it goes in all directions. It’s not stating a “false equivalence” because pointing out that you can draw a line between 0 and 100 isn’t stating an equivalence.

Different risk profiles exist. There’s a difference between installing whatever from wherever, installing a relatively well known project but with only one or two Actually Trusted maintainers, and installing a high profile well maintained project with corporate backing.

This is true in Linux land, and it's true in npm land. You can't just add whatever repo and apt-get to your heart's content. Or, you know, you also can, depending on your tolerance for risk.


I agree with what you're saying, but I don't see any discussion of risk in any conversation about JS programming (and I'm only picking on JS because of the OP - Ruby and Python aren't any better, and even Rust is heading the same way).

For example (taking one of the top results for "javascript dependency management" at random): https://webdesign.tutsplus.com/tutorials/a-guide-to-dependen... talks about all the dependency management methods available. The word "risk" is not in that article. There is no paragraph saying "be aware that none of these package managers audit any of the packages they serve, and you are at risk of supply-chain attack if you import a dependency using any of them".

This doesn't get any better as you get more expert. I've had conversations with JS devs who've been professionally coding for years, and none of them are aware of it (or, if they are, none treat it as a serious threat). You can see the same in the comments here.

If there's not even any discussion of risk, and no efforts to manage it, then it's not really a relevant factor. No-one is considering the risk of importing dependencies, so the 0-100 scale is permanently stuck on 100.


And should we all start rolling our own crypto now to avoid dependencies? In most cases a stable library is going to be much more secure than a custom implementation of `x`. Everything has trade-offs. What's stupid is dogma.


I know you're being hyperbolic and I also want to add that for crypto you should just use libsodium. The algos and the code are very good. And lots of very smart folk have given it a lot of review. And its API is very nice.


When you say this, do you mean actual C libsodium? Because surely you don’t mean that I, a js developer, should need to figure out how to wrap this .h file thingy to get it to work in js when there’s SIX third-party libsodium implementations/wrappers/projects sitting right there listed on the libsodium website? /s


This. And npm isn't the only instance. (Appreciate the voice in the wilderness Marcus.)


Or perhaps, the sky is not falling.


> Even if you’re writing just for a vetted distribution of an OS, and you write code with zero explicit dependencies, you still have much more than zero dependencies.

Sure, the entire OS is a dependency. Nothing I said contradicts that. And yes, every application developer should be aware of what they are depending on when they write software for a particular OS.

> The key is to have an entire ecosystem that you can, to some degree more or less, trust.

You don't necessarily need to trust an entire ecosystem, but yes, every dependency you have is a matter of trust on your part; you are trusting the dependency to work the way you need it to work and not to introduce vulnerabilities that you aren't aware of and can't deal with. Which is why you need to be explicitly aware of every dependency you have, not just the ones you directly import.


I am actually not sure if this is possible, while also accepting security updates etc from my OS distributor? How do you literally personally vet every line of code that gets run directly AND indirectly by your application, and still have time to write an application?

I’m okay with saying, “I trust RHEL to be roughly ok, just understand the model and how to use it, and keep my ear to the ground for the experts in case something comes up.”

At the level of npm, I feel roughly the same about React. I don’t trust it quite as much, but I’m also not going to read every code change. I’ll read a CHANGELOG, sure, and spelunk through the code from time to time, but that’s not really the same. I’ll probably check out their direct dependencies the first time, but that’s it.

I actually don’t know how you could call yourself an application developer in most ecosystems and know every single dependency you actually have all the way down, soup to nuts. Heck, there are dependencies that I accept so that my code will run on machines that I have no special knowledge of, not just my own familiar architecture. I accept them because I want to work on the details of my application and have it be useful on more than just my own machine.

Edit for clarity: I agree with almost everything you’re suggesting as sensible. Just not with your conclusion: that you’re not a “real” application developer if you don’t know all of your dependencies


> I am actually not sure if this is possible, while also accepting security updates etc from my OS distributor?

Accepting the OS as a dependency includes the security updates from the OS, sure.

> How do you literally personally vet every line of code

Ah, I see, you think "understanding the dependency" requires vetting every line of code. That's not what I meant. What I meant is, if you use library A, and library A depends on libraries B, C, and D, and those libraries in turn depend on libraries E, F, G, H, I, etc. etc., then you don't just need to be aware that you depend on library A, because that's the only one you're directly importing. You need to be aware of all the dependencies, all the way down. You might not personally vet every line of code in every one of them, but you need to be aware that you're using them and you need to be aware of how trustworthy they are, so you can judge whether it's really worth having them and exposing your application to the risks of using them.

> I’ll probably check out their direct dependencies the first time, but that’s it.

So if they introduce a new dependency, you don't care? You should. That's the kind of thing I'm talking about. Again, you might not go and vet every line of code in the new dependency, but you need to be aware that it's there and how risky it is.

> I actually don’t know how you could call yourself an application developer in most ecosystems and know every single dependency you actually have all the way down, soup to nuts.

If you're developing using open source code, information about what dependencies a given library has is easily discoverable. If you're developing for a proprietary system, things might be different.


I really appreciate your stance, but I just have to disagree. If it's core React, I don't check beyond what curiosity mandates. If it's a smaller project with fewer eyes on it, yes, absolutely I'll work through the dependency chain. But that can also get pretty context-dependent, based on where the code is deployed.

But I don’t know how you can make such a strong distinction between “a committed line of code” vs “a dependency”, because the only thing differentiating them is the relative strength of earned trust regarding commits to “stdlib,” commits to “core,” commits to “community adopted,” etc.

It’s too much. There’s a long road of grey between “manually checks every line running on all possible systems where code runs and verifies code against compiled binary” and “just run npm install and yer done!”


I only imported 10 dependencies, but those 10 dependencies each had 10 dependencies which each had 10 dependencies which each had 10 dependencies, and all of a sudden I'm at 10k dependencies again...


The transitive dependency chain should be part of your evaluation of a library. Frameworks are special cases, for sure. But if you’re adding a dependency and it adds 10,000 new entries to your lock file, that should be taken into consideration during your library selection process. Likewise, when upgrading dependencies, you should watch how much of the world gets pulled in.

That said, I don't know what the answer is for JS. There are too many dependency cycles that make auditing upgrades intractable. If you're not constantly upgrading libraries, you'll be unable to add a new one because it probably relies on a newer version of something you already had. In most other ecosystems, upgrading can be a more deliberate activity. I tried to audit NPM module upgrades and it's next to impossible if using something like Create React App. The last time I tried Create React App, yarn audit reported ~5,000 security issues on a freshly created app. Many were duplicates due to the same module being depended on multiple times, but it's still problematic.
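
A couple of rough ways to check this before committing to a library (npm 7+ commands; `some-package` is a placeholder):

    # See what adding a package would pull in, without actually installing it
    npm install some-package --dry-run

    # Count what's already in the tree
    npm ls --all | wc -l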


That's going to be incompatible with writing interesting software on the web, unless we want to just hand the problem over to a handful of big players who can afford to hand-vet 10,000 dependencies.

The reason packages are so big is that the complexity of an interesting app is irreducible. People don't import thousands of modules for fun; they do it because simple software tends to require complex underpinnings. Consider the amount of operating system that underlies a simple "Hello, world!" GUI app. And since the browser-provided abstractions are trash for writing a web app, people swap them out for frameworks.

I'm working on a React app right now where I've imported about a dozen dependencies explicitly (half of which are TypeScript @type files, so closer to a half-dozen). The total size of my `node_modules` directory is closer to a couple hundred packages. It's 35MB of files. And no, I couldn't really leave any of them out to do the thing I want to do, unfortunately.


People oftentimes do this, with suspicious reasoning. Classic examples:

1) "We have is-array as a dependency" Why? Well, pre Array.isArray, there wasn't anything built-in. Why not just write a little utility function which does what is-array does? See #3

2) "We have both joi and io-ts. Don't they do roughly the same thing?" They do; io object validation. New code uses io-ts, but a bunch of old code relies on joi. Should we update it? Eh we'll get around to it (we never do).

3) "is-array is ten lines of code. why don't we just copy-paste it?" Multiple arguments against this, most bad. Maybe the license doesn't support it. More usually; fear that something will change and you'll have to maintain the code you've pasted without the skills to do so. Better to outsource it (then, naturally, discount the cost of outsourcing).

4) "JSON.parse is built-in, but we want to use YAML for this". So, you use YAML. And need a dependency. Just use JSON! This is all-over, not just in serialization, but in UI especially; the cost analysis between building some UI component (reasonably understood cost) versus finding a library for it (poorly understood cost, always underestimated).

Not all dependency usage is irreducible. Most is. But some of it is born, fundamentally, out of a cost discount on dependency maintenance and a corporate deprioritization of security (in action; usually not in words).


The counterpoint is all the security issues generated when dev teams re-implement the already-well-implemented. Your points are valid, but as with anything, it is not cut and dry.


If your software is ultimately dependent on thousands of other modules from various developers all over the Internet, you have no idea whether what you're depending on is actually well implemented or not.


Didn't you just describe most Linux distributions?


No. First, Linux is an entire operating system, not a single application. Second, when people pull software from their Linux distribution that ultimately comes from developers all over the Internet, they do it to use the software themselves, not to develop applications that others are going to have to deal with. Third, Linux distributions put an extra layer of vetting in between their upstream developers and their users. And for a fourth if we need it, I am not aware of any major Linux distribution that has pulled anything like the bonehead mistakes that were admitted to in this article.


> No. First, Linux is an entire operating system, not a single application.

Sorry, to clarify: when I say "Linux distro" here, I mean the distribution package sets, like Debian or Ubuntu.

> Second, when people pull software from their Linux distribution that ultimately comes from developers all over the Internet, they do it to use the software themselves, not to develop applications that others are going to have to deal with.

The distros are chock full of intermediary code libraries that people use all the time to build novel applications depending on those libraries, which they then distribute via the distro package managers. I'm not quite sure what you mean here... I've never downloaded libfftw3-bin for its own sake; 100% of the time I've done that because someone developed an application using it that I now have to deal with.

Conversely, I've also used NodeJS and npm to build applications I intend to use myself. It's a great framework for making a standalone localhost-only server that talks to a Chrome plugin to augment the behavior of some site (like synchronizing between GitHub and a local code repo by allowing me to kick off a push or PR from both the command line and the browser with the same service).

> Third, Linux distributions put an extra layer of vetting in between their upstream developers and their users.

This is a good point. It's a centralization where npm tries to solve this problem via a distributed solution, but I'm personally leaning in the direction that the solution the distros use is the right way to go.


When I'm writing desktop software, I don't have to worry about whether yaml adds a dependency that I can't afford to maintain.

People who develop web apps want that level of convenience. And if we can't solve the security problem in a distributed fashion, web development will end up owned by big players who can pay the money to solve the problem in a centralized fashion.


> When I'm writing desktop software, I don't have to worry about whether yaml adds a dependency that I can't afford to maintain.

Why not? Because some big, centralized player has put the time, effort, and money into making yaml part of a complete library that gives you everything you need to write desktop software. Nobody writes desktop software by importing thousands of tiny libraries from all over the Internet.


I agree. As I said at the top of this thread,

> That's going to be incompatible with writing interesting software on the web, unless we want to just hand the problem over to a handful of big players who can afford to hand-vet 10,000 dependencies.

Consolidating into a distro-management-style solution would be one option.


> why don't we just copy-paste it? ... Maybe license doesn't support it.

You did say the argument was bad, but a license that prevents you from making a copy manually yet allows you to make a copy through the package manager isn't a thing, is it? In either case the output of your build process is a derived work that needs to comply with the license.

Unless, perhaps, you have a LGPL dependency that you include by dynamic linking (or the equivalent in JS – inclusion as a separate script rather than bundling it?) in a non-GPL application and make sure the end user is given the opportunity to replace with their own version as required by the license.


> The reason packages are so big is the complexity for an interesting app is irreducible

These kinds of claims demand data, not just bare assertions of their truthiness.

Firefox, as an app with an Electron-style architecture (before Electron even existed), was doing some pretty interesting stuff circa 2011 (including stuff that it can't do now, like give you a menu item and a toolbar button that takes you to a page's RSS feed), with a bunch of its application logic embodied in something well under 250k LOC of JS.

The last time I measured it, a Hello World created by following create-react-app's README required about half a _gigabyte_ of disk space between just before the first `npm install` and "done".

That NPM programmers don't know _how_ to write code without the kind of complexity that we see today is one matter. The claim that the complexity is irreducible is an entirely different matter.


Firefox's 250k LOC are riding on the millions of lines of code of the underlying operating system and the GUI/TCP/audio toolkits it used. To compare it to npm development, you would need to factor in the total footprint of every package that you had to install to compile Firefox in 2011.

... And I think it's an interesting question to ask why we can trust the security of, say, Debian packages and not npm, given how many packages I have to pull down to compile Firefox that I haven't personally vetted.


> Firefox's 250k LOC are riding on the millions of lines of code of the underlying operating system and GUI | TCP | audio toolkits that it used.

Right, just like every other Electron-style app that exists. The comparison I made was a fair one.

> To compare it to npm development, you would need to factor in the total footprint of every package that you had to install to compile Firefox in 2011.

No, you wouldn't. That's a completely off-the-wall comparison.

How many lines of application code (business logic written in JS including transitive NPM dependencies before minification) go into a typical Electron app in 2021? Into a medium sized web app? Is the heft-to-strength ratio (smaller is better) less than that of Firefox 4, about the same, or ⋙?


After I compile my Rust or C app (and pull all attendant libraries to make that possible, spread all over my system) I’ve downloaded about 500MB of code. The resultant binary is 10MB.

If I do the same thing with my JS app, I still download a bunch of libraries, but puts them all in node_modules. That’s also about 500MB. The resulting compiled/built code is around 2MB.

I dunno, seems roughly the same.


It sounds like you're using the React Hello World example to respond to the comparison to Firefox. They're separate points which stand on their own.

With respect to the package size issue, the 500MB-to-2MB observation does not bode well for the claim of irreducibility.


> The reason packages are so big is the complexity for an interesting app is irreducible.

This is absolutely, demonstrably false. Can you really claim that you use 100% of the features provided by all of the dependencies you pull in? If not, you are introducing unnecessary complexity to your code.

That doesn't mean that this is necessarily a bad thing, or that we should never ever introduce incidental complexity—we'd never get anything done if that was the case. My point is simply that there exists a spectrum that goes from "write everything from scratch" on one end all the way to "always use third-party code wherever possible" on the other. It's up to you to make the tradeoff of which libraries are worth pulling in for a given project, but when you use third-party code, you inevitably introduce some amount of complexity that has nothing to do with your app and doesn't need to be there.


I don't use 100% of the features I pull in. But I also don't use 100% of the features of libc or gtk if I'm building a GUI app in C.

I have 35 MB of node_modules, but after webpack walks the module hierarchy and tree-shakes out all module exports that aren't reachable, I'm left with a couple hundred kilobytes of code in the final product.


> But I also don't use 100% of the features of libc or gtk if I'm building a GUI app in C.

That’s exactly my point. This is a tradeoff that’s inherent to software development and has nothing to do with the web or Node or NPM. You could just as well decide to write your desktop app with a much smaller GUI library, or even write your own minimal one, if the tradeoff is worth it to reduce complexity. (Example: you’re writing an app for an embedded device with very limited resources that won’t be able to handle GTK.)


> browser-provided abstractions are trash for writing a web app

This is the key.

If browsers improved here, we wouldn't need half of the dependencies that we use now. It took nearly a decade to get from moment.js to some properly usable native functions, for example.

Besides that we _really_ need to solve the issue of outdated browsers. Because even when those native APIs exist we'll need fallbacks and polyfills and lots of devs will opt for a non-standard option (for various reasons).

The web is still a document platform with some interactivity bolted on top, I love it but it's a fucking mess.


Without more information, this mindset is stuck where the web platform was maybe a decade or more ago - roughly a dog or cat lifetime. Consider the list of APIs at https://developer.mozilla.org/en-US/docs/Web/API - I'd be curious to know if anyone active on HN could actually say they have proficiency with the entire list. Professionally speaking, I wouldn't call that a mess. I'd call it a largely unused and unexplored opportunity.


Somehow people managed to develop useful software before NPM and node and so on, without having thousands of very small dependencies. Maybe it's because the stuff built in to Javascript is nearly useless? And the older languages had a standard library that included most of the useful stuff you'd need to build something?


Ruby, Python, Go, Rust, etc all have this exact same problem; it's not unique to NPM.

JS has a culture of using lots of small, composable modules that do one thing well rather than large, monolithic frameworks, but that's only an aggravating factor; it's not the root of the problem.


The root problem is no stdlib and a language design riddled with edge case foot guns that are easy to miss in what should be trivial code.


Again, that's only an aggravating factor, not the root cause. Supply chain attacks can happen in literally any language that has a package manager.

Here's a similar issue that occurred with Python's PIP just this year: https://portswigger.net/daily-swig/dependency-confusion-atta...


They do not; they have capable and trusted standard libraries, and it's quite possible to build a web app in those other languages without any external dependencies whatsoever.

JS and its culture of small dependencies that do one thing but import 100 other things to do that thing is the root of the problem here.


The GNU software ecosystem can be described as a "culture of small dependencies that do one thing but import 100 other things to do that thing..." Installing, say, GIMP for the first time using `apt-get install` pulls in about 50 packages and many, many megabytes in total.

So the issue is probably something other than using bazaar-style code design. I think as other people in the thread have noted, the distros have centralized, managed, and curated package libraries that get periodically version "check-pointed" and this is not how npm works.

I may have my answer to the original thought I floated: the way this problem has been solved successfully is to centralize responsibility for oversight instead of distributing it.


> that do one thing well

And sometimes even something the language already does, but the author didn’t know.


Part of that was that we didn't make major changes to how we did things every other project back then. If we needed to do X and that wasn't built in to the language or standard library we were using we would either write our own X library or we could take the time to carefully evaluate the available third party X libraries and pick a high quality one to use. We could justify spending the time on that because we knew we'd be taking care of not just our immediate X needs but also the X needs for our next few years worth of projects.


BTW, you can build a lot of interesting things with jQuery alone.


> That's going to be incompatible with writing interesting software on the web

Lots of people are writing interesting web software without these problems - the website you’re currently posting on is one example. So I completely disagree with this statement and think you need to examine your assumptions.

There is life outside npm.


"Interesting" was a bad choice for specificity here on my part. By the definition I mean, HN isn't interesting... It's got interesting content, but the UI is a dirt-simple server-side-generated web form.

OpenStreetMap is "interesting." Docs and Sheets are "interesting." Autodesk Fusion 360 is "interesting." Facebook is "interesting." Cloud service monitoring graph builders are "interesting." The Scratch in-browser graphical coding tool is "interesting." Sites that are pushing the edge of what the browser technology is capable of are "interesting."


None of the sites you mention above would require npm to build.

At some stage after you've seen enough 'interesting' dependencies changing the world around your app as you write it you'll realise that boring is good for most of the tech you depend on - the more boring the better, and the fewer dependencies the better.


You might be surprised how small a team it took to produce Microsoft Office 2000 (the last good version), or the Windows NT kernel, or WhatsApp.

One need not be a big player to write good code without 10000 dependencies


I have to think there's a lot of YAGNI going on - dependencies that are included to be a better version of native functionality. A faster JSON parser, say, with, I dunno, 20 dependencies (a count which may extend further within those deps) for something where slow JSON parsing has not yet become an issue. I think there are a lot of "academic" inclusions out there like this.


My experience working on tens of front end projects is the complete opposite. Nobody is adding dependencies just for the fun of it, or because you might need it in a year. You add a dependency because you need some functionality and there is no time/budget to re-do it in house - not to mention that if it's a well-supported library with, for example, hundreds of thousands of users, it's unlikely you could even make it better.


> there is no time/budget to re-do it in house

What are the actual time cost savings when you take the total costs into consideration?[1][2] What would it look like if you didn't implement an app by stringing together dozens/hundreds/thousands of third-party modules implemented bottom-up, but instead took control of the whole thing top-down?[3]

1. https://jvns.ca/blog/2021/11/15/esbuild-vue/

2. https://news.ycombinator.com/item?id=24495646

3. https://www.teamten.com/lawrence/programming/write-code-top-...


I agree that using Node to write browser client code requires more configuration of the compilation environment than I would like (especially since I have to configure both Node and some kind of packer to convert all of my ES6 module dependencies into one flat-packed JavaScript file).

That's a small up-front one-time cost relative to writing Redux from scratch. And before anyone asks... Yes, our use case is complex enough to justify a local state storage solution based on immutable state curated via actions and reducers. Just as our rendering use case is complex enough to justify React.


Then you are shit out of luck and vulnerable to supply-chain attacks. Good luck with that.


Well, that's what I'm wondering. GNU/Linux distros like Debian and Ubuntu don't seem to suffer supply chain attacks, but it's not entirely clear to me why. Is it because the distros are more carefully curated, and the infrastructure for extending them older so it has had more time to wrestle security concerns to the ground?

Or is it, disquietingly, the possibility that they are completely vulnerable to this sort of attack and either nobody has noticed they're compromised or attackers haven't decided that compromising a major desktop Linux distro is worth the time?

https://www.zdnet.com/article/open-source-software-how-many-...


Distributions like Debian are _highly_ aware of supply chain attacks. That's one of the key reasons for projects like Reproducible Builds [0] and rekor [1] existing.

So yes, distributions are carefully curated, with a large team of experts vetting the system in a huge number of ways, and are always looking to improve upon them. Because attackers are actively attempting to compromise major distributions.

[0] https://wiki.debian.org/ReproducibleBuilds

[1] https://lwn.net/Articles/859965/


Unfortunately most modern JavaScript tooling has made this very difficult. Before you even have a "hello world" app running create-react-app et al. will install literally a thousand random packages. It's already over.


Maybe 10 stable dependencies without dependencies? Otherwise it's dependencies all the way down.

Is vendoring in a dependency just slowing things down? It slows down development and bakes existing attacks in for longer.


What’s the alternative? Writing everything in house? I think a better solution would be a better dependency installer/resolver that is as secure as possible.


> What’s the alternative?

Don't use the popular hype garbage. Yes, I realize that may not be an option for a lot of people professionally. But I believe if you actually spend some time on due diligence for any dependency you consider adding, you can significantly reduce the number of untrusted deps you pull in.

One of the problems, of course, is that JavaScript exacerbates this somewhat by not having a comprehensive standard library. But whenever I look at Go libraries, go.sum is usually one of the first files I click to check how much garbage they pull in.


A standard library is a dependency too and can have bugs in it. What's better - having a stdlib tied to the runtime release schedule, or having a lot of micro-libraries on their own rolling release schedules which can quickly release security patches?

I agree, having those dependencies authored by Node.js Foundation itself will yield higher level of trust. But we're all human, and one can argue earnest open source developers have better aligned incentives than a randomly selected Node.js Foundation employee.

I honestly am not sure I fully agree with what I've just written above either. But one thing I do want to point out: these things are NOT black and white. The specific set of trade-offs the Node.js ecosystem has fallen into might look accidental and inadequate, but I think it's fairly reasonable.


Yes, using a standard library is better. It is more stable and trustworthy, and it's maintained by a small group of people.


I would be with you, but leftpad was a thing. Anyone importing leftpad (or any of the similar 5-line dependencies) has no leg to stand on here.

Yes, you should write leftpad in-house. Anything that is a copy-paste Stack Overflow answer should not be a dependency.
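
For what it's worth, the in-house version really is a couple of lines. A rough sketch (the names are mine, not the original left-pad API):

    // Minimal left-pad sketch: pad `input` with `padChar` until it is `length` chars long.
    function leftPad(input: string, length: number, padChar: string = " "): string {
      if (input.length >= length) return input;
      return padChar.repeat(length - input.length) + input;
    }
    leftPad("42", 5, "0"); // "00042"

(And modern runtimes ship String.prototype.padStart, which makes even this unnecessary.)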


You’re not wrong, I’ll admit, but if we judge everything by the most extreme examples we’d still be writing assembly and only mathematicians would be programmers. I’m sure there’s a universe where that’s the case, and I’m sure there’s a percentage of people here who wish that were the case, but I’d say the world is better off with separation of skill sets and I’d rather leave the writing of libraries to people who enjoy writing them and can do it well.


How about we just go back to writing all the trivial stuff in house?

Nobody is suggesting we each write our own charting library, but we should each be capable of writing that function that picks a random integer between 10 and 15. Because the npm version of that function will have the four thousand dependencies that everybody likes to mock whenever npm is discussed.
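
For example, a minimal sketch of that in-house helper, with inclusive bounds and assuming Math.random is good enough for the use case:

    // Random integer between min and max, inclusive.
    // Don't use this where cryptographic randomness matters.
    function randomIntBetween(min: number, max: number): number {
      return Math.floor(Math.random() * (max - min + 1)) + min;
    }
    randomIntBetween(10, 15); // e.g. 13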

Other People’s Javascript is generally pretty terrible. My policy is to only use it when absolutely necessary.


Frameworks and library authors could stand to do more in-house. It's also on devs to vet a library for maintenance concerns like sprawling dependencies.


Or a very large dependency like Apache Commons in Java that you can trust, rather than one dependency for zip compression, one dependency for padding, one dependency for HTTP error codes, and so on?


That's essentially what a Linux distribution is.


How do you police what your imports import? Serious question. Let's say I'm building a Discord app (as I want to do.) I'd use either npm or Python's pip to get one module - the Discord module. But who knows how safe what it imports is. That's the point.

Are there stable dependencies from reputable companies that do the things I want without me vetting 10k submodule imports?


It may require picking a different language with a different culture. JS badly needs a more capable standard library.


That's the crux of the matter. Server-side you can, and should, choose a different platform than Node.js but for the browser we're all stuck with JS. A more capable standard library, where vetting everything would be much more feasible, would do much to improve the situation.


I somewhat naively assume that at least if I use plain React or Angular then

- someone at Facebook or Google has vetted the dependency graph for those

- I also assume they have internal Snyk-like tools

- I also assume other users have similar tools

so someone should catch it.

When it comes to anything else I often look into what it pulls in.

Also, I keep an eye on the yarn.lock file in pull requests.


> so someone should catch it.

Just a week or two ago, a malicious NPM package was published which, for the hour or so that it was up, would be pulled in by any installation of create-react-app, since somewhere in the dependency tree it was specified with “^” to allow for minor updates.

Any machine that ran "npm i" with CRA or who knows how many other projects during that hour may have had its credentials compromised.

1 hour to find and unpublish the malicious package is a fast turnaround time, so someone was watching and that’s great. But any NPM tree that includes anything other than fully-specified and locked versions all the way down the tree is just waiting for the next shoe to drop.
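
One low-tech guard, sketched below, is to fail CI whenever a directly declared dependency uses a range instead of an exact version. This script and its assumptions (a plain package.json in the working directory) are illustrative, not an npm feature:

    import { readFileSync } from "node:fs";
    // Flag any directly declared dependency that isn't pinned to an exact version.
    // Lockfiles still matter for transitive deps; this only covers the top level.
    const pkg = JSON.parse(readFileSync("package.json", "utf8"));
    const exact = /^\d+\.\d+\.\d+(-[\w.]+)?$/;
    for (const section of ["dependencies", "devDependencies"]) {
      for (const [name, version] of Object.entries(pkg[section] ?? {})) {
        if (!exact.test(String(version))) {
          console.error(`Non-exact version for ${name}: ${version}`);
          process.exitCode = 1;
        }
      }
    }

Pairing that with a committed lockfile and "npm ci" (which installs exactly what the lockfile says) covers the transitive tree as well.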


So my specific usecase (write a Discord bot) has the solution of "write everything from scratch" or "don't use JS"?

That's kinda what I assumed, but "only run code that have been signed off on by a major company" is kinda a shitty solution.


This requires that you're pulling in only exactly the same versions of those dependencies as those that Facebook and Google have vetted. Is there a way to do that?


A combination of things, I think.

1. Running those builds in VMs is a good idea.

2. Monitoring for weird behavior.

3. Restricting build scripts from touching anything outside of the build directory.

4. Pressuring organizations like npm to step up their security game.

It would be really nice if package repositories:

1. Produced a signed audit log

2. Supported signing keys for said audit log

3. Supported strong 2FA methods

4. Created tooling that didn't run build scripts with full system access

etc etc etc

I started working on a crates.io mirror and a `cargo sandbox [build|check|etc]` command that would allow crates to specify a permissions manifest for their build scripts, store the policy in a lockfile, and then warn you if a locked policy increased in scope. I'm too busy to finish it but it isn't very hard to do.


Thanks. I was thinking of a CI step that checked the SHA-256 of yarn.lock against a "last known good" value committed by an authorized committer and enforced by a branch policy.
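
Something like this minimal sketch, assuming the "last known good" hash lives in a file protected by the same branch policy (the file names here are made up):

    import { createHash } from "node:crypto";
    import { readFileSync } from "node:fs";
    // Fail the build if yarn.lock no longer matches the reviewed "known good" hash.
    const actual = createHash("sha256").update(readFileSync("yarn.lock")).digest("hex");
    const expected = readFileSync("yarn.lock.sha256", "utf8").trim();
    if (actual !== expected) {
      console.error(`yarn.lock hash mismatch: expected ${expected}, got ${actual}`);
      process.exit(1);
    }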

Signed audit logs seem like a good idea.

Now...how to get developers to avoid using NPM and Yarn altogether on sensitive projects...


>How do we even mitigate against these types of supply-chain attacks

I know HN is usually skeptical of anything cryptocurrency/blockchain related, and I am too. But as weird as it sounds, I think blockchain might actually be the solution here.

The problem with dependency auditing is it's a lot of work. And it's also duplicate work. What you'd really like to know is whether the dependency you're considering has already been audited by someone you can trust.

Ideally someone with skin in the game. Someone who stands to lose something if their audit is incorrect.

Imagine a DeFi app that lets people buy and sell insurance for any commit hash of any open source library. The insurance pays out if a vulnerability in that commit hash is found.

* As a library user, you want to buy insurance for every library you use. If you experience a security breach, the money you get from the insurance will help you deal with the aftermath.

* As an independent hacker, you can make passive income by auditing libraries and selling insurance for the ones that seem solid. If you identify a security flaw, buy up insurance for that library, then publicize the flaw for a big payday.

* A distributed, anonymous marketplace is actually valuable here, because it encourages "insider trading" on the part of people who work for offensive cybersecurity orgs. Suppose Jane Hacker is working with a criminal org that's successfully penetrated a particular library. Suppose Jane wants to leave her life of crime behind. All she has to do is buy up insurance for the library that was penetrated and then anonymously disclose the vulnerability.

* Even if you never trade on the insurance marketplace yourself, you can get a general idea of how risky a library is by checking how much its insurance costs. (Insurance might be subject to price manipulation by offensive cybersecurity orgs, but independent hackers would be incentivized to identify and correct such price manipulation.)

The fact that there is actual value here should give the creator a huge advantage over other "Web 3.0" crypto junk.


This is a pretty clever application of DeFi, thanks. DeSec? Can't help but wonder if there still would be incentive for lone wolves to slip backdoors and vulnerabilities into libraries though[0].

[0]: https://portswigger.net/daily-swig/smuggling-hidden-backdoor...


> I can't help but wonder if the root cause was HTTP request smuggling, or if changing package.json was enough.

Maybe I'm just incredibly cynical from my experiences with the intersection of the JS ecosystem and security, but...

...I'd bet dimes to dollars it's the latter (just changing the package.json). My guess is they authenticate but don't actually scope the authentication properly, and no one noticed because no one thought to look.

Of course, as we've seen in the past decade, there's so much inertia behind the JavaScript ecosystem that none of this is going to fundamentally change. It'll just take another decade or so for the ecosystem to reinvent all of the wheels and catch up to the rest of the space.

And at that point it will probably be considered stuffy and "enterprise" and the new hotness unburdened from such concerns will repeat the cycle again.


> to reinvent all of the wheels and catch up to the rest of the space.

Which of the public package systems are the state of the art that should be replicated?


The 'wheels' might simply be having a standard library and fewer packages instead of a micropackage mess.

For example, look at Django: it provides more functionality than React (though the two aren't directly comparable), yet installation is quick and there's a small number of packages from trusted authors.

The ecosystem is orthogonal to how good the package manager is.


Java’s works really well.

I think it makes other package managers look like a toy.


I assume you are referring to Apache Maven tooling (and compatible) and the pom repos, like Sonatype's Central Repo.

PGP package signing is a huge plus. Is that a requirement for publishing?

How many different repos do you typically have to deal with in the average project?

Would Sonatype react quickly to malware issues like this in the repository? Have there been examples of similar package hijacking?


It’s a requirement for the central repo if I recall.

And the best part is that the signature handling is part of Java, not the package manager, so nothing needs to be re-invented. The default class loader checks the signatures at runtime as well.

Typically you need 1-2 repositories, but often just 1. But if you're an organization, you can set up your own repository very easily and use it to store private deps and to cache deps (which also allows you to lock binaries and work offline). Repo mirroring is super easy to set up. If you have an internal repo, you can just have your internal projects use it, and your computer never has to reach out to the public Internet for a package.

Unlike other languages, the "central repo" and the package manager tooling are independent, and package resolution is distributed. When you start a project, you choose your repos. I don't know how quickly Sonatype would react personally, but they are only the de facto default. Many packages are published on several repos, and mirroring is a default feature of a lot of repo software. If Sonatype started screwing up, everyone could abandon them instantly, which forces them to be better.


I'm seriously considering moving to a workflow of installing dependencies in containers or VMs, auditing them there, and then perhaps committing known-safe snapshots of node_modules into my repos (YUCK). Horrible developer experience, but at least it'll help me sleep at night.

I have had people tell me in discussions online, also entirely seriously, that running a package manager to install a dependency while developing is inherently dangerous and anyone who does it outside of a disposable sandboxed VM deserves everything they get. If the packages are inexplicably allowed to do arbitrary things with privileged access to the local system without warning at installation time then clearly the first part is correct, but victim-blaming hardly seems like a useful reaction to that danger.


>How do we even mitigate against these types of supply-chain attacks, aside from disabling run-scripts, using lockfiles and carefully auditing the entire dependency tree on every module update?

Don't trust the package distribution system - use public key crypto.


Public key crypto doesn't help much if your private keys get stolen, which was essentially what happened with some of the recent hacked packages and which is why they're now starting to enforce 2FA.


The longer term solution to this is public key signatures with an ephemeral key, rooted to some trusted identity source (e.g., a GitHub account with strong 2FA). There’s lots of work on that front coming out of the Open Source Security Foundation.


are you really using private keys without a passphrase in 2021?


It's very easy: commit a developer signature to the repo that can never be changed, and force the devs to sign their stuff before any change to a binary, or any download, is allowed.

That way, anything can try to upload, but it will fail the signature check.
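
As a rough sketch of what that check could look like - the file names and key format are assumptions, using Node's built-in crypto with an Ed25519 public key committed to the repo:

    import { createPublicKey, verify } from "node:crypto";
    import { readFileSync } from "node:fs";
    // The maintainer's public key lives in the repo and can't be swapped out by
    // the registry; each uploaded artifact ships with a detached signature.
    const publicKey = createPublicKey(readFileSync("maintainer-ed25519.pub"));
    const artifact = readFileSync("package-1.2.3.tgz");
    const signature = readFileSync("package-1.2.3.tgz.sig");
    // For Ed25519 keys the digest argument must be null.
    if (!verify(null, artifact, publicKey, signature)) {
      throw new Error("Signature check failed - refusing to accept this artifact");
    }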


This assumes that the developers themselves are not malicious (see: left-pad) and that their signing keys can't be stolen.


Also: "This vulnerability existed in the npm registry beyond the timeframe for which we have telemetry to determine whether it has ever been exploited maliciously."


Having different services trust different (and unrelated) bits of the request is an immortal classic though, great stuff.


The part that made sure the user could update the package could at least have checked that the payload was about that package before passing it to the service that trusted it.
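
In other words, something as simple as this hypothetical check (none of these names come from npm's actual code):

    // Sketch of the missing check: the package the request was authorized for
    // must match the package named in the uploaded manifest before the request
    // is handed to the downstream publish service.
    function assertPayloadMatchesAuthorizedPackage(
      authorizedName: string,
      manifest: { name: string },
    ): void {
      if (manifest.name !== authorizedName) {
        throw new Error(
          `Payload targets "${manifest.name}" but the request was authorized for "${authorizedName}"`,
        );
      }
    }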


That one, combined with the other "ability to read names of private packages" issue, makes for the possibility of a really, really sneaky attack. I wonder how many orgs treat their private npm packages with significantly less scrutiny than the public ones they rely on?


No CVE mentioned. That's hard for me to grok - could someone educate me on why it's missing from the blog post?


services don't get CVEs.


Well, didn't we just see two major packages published on npm containing malware? Both had CVEs.

Now we have the probable root cause, buried in a wall of text. No CVE.


CVEs alert end users that they need to take action to apply updates. That's relevant when a specific npm package contained a known vulnerability. It's not relevant when the npm server contained a known vulnerability. There's nothing a user of npm can do to update the npm server.

CVEs don't just mean "this is a big security problem".


hehe...

CVE: "the entire javascript/ruby/python development model is insecure"

affected: "the whole damn internet"

resolution:"rewrite the last 10 years of internet developmet from scratch"

not sure that's gonna happen


At least the npm packages outside their telemetry horizon should be updated immediately.


Yes, because pure services don't get CVEs. CVEs are for distributed software.


Isn't this the biggest security flaw in the package ecosystem ever?

They don't even know if, when, and by whom this was exploited, but maybe I didn't pay enough attention to the details in the few paragraphs devoted to the real problem.

So shouldn't we assume all NPM packages published prior to the 2nd of November are compromised?

And if so, shouldn't this deserve a CVE? (https://en.wikipedia.org/wiki/Common_Vulnerabilities_and_Exp...)


CVEs aren't usually assigned for "there might be something wrong", but only for specific, identified issues.


I used to pack shelves for a supermarket and I believe at the time (~15 years ago) we had a two hour window in which perishables could be out of the fridge. Usually we'd wheel out pallets from the stockroom fridge or freezer onto the floor, move all the boxes off the pallet into their rough locations on the floor, and then pack each box into the fridges sequentially (effectively batching up the sorting/locating/carrying work). If stock sat on the floor without getting into the fridge for more than two hours, or in the rare occasion there was a power outage and the fridges or freezers were out for more than two hours, stock had to be written off.

All that to say the allowed timeline might be longer than you expect (or desire)!


It shouldn't be a question of cost of living. Companies pay significantly more (or less) in local markets because of supply and demand in those local markets.

Historically those markets have been localised due to the fact that few companies employed full time remote team members, and relocating countries is intentionally made difficult and expensive.

If we see a sustained transition where a significant percentage of companies (even in just particular industries) support full time remote work, we should expect to see supply and demand for talent in that market to become less localised (time zones are still a thing, and full time remote is different to full time remote _and_ distributed).

If that plays out, you'd expect locations with historically lower compensation to get a bump (regardless of cost of living), and locations with historical higher compensation to get a reduction (again, regardless of cost of living).

Companies don't care what your expenses are per se. The valley is an intensely competitive hiring market which forces up compensation, which in turn forces up cost of living - housing supply is very finite and many people living there have lots of disposable income.


Long term Stripe customer across multiple companies here.

I'm ok with paying for services like this that provide loads of value. I expect there's increasing diversity in terms of Stripe's customer base and how they use the product, and trying to pick a single percentage price point that works across all of them is no longer feasible.

That said, I'm quite unhappy with Stripe's pricing as an AU customer. We're paying exorbitant rates to convert USD to AUD (think retail bank rates despite transacting multiple millions a year, ~3x what we'd pay Transferwise). There's no option to settle in USD, and a lot of our expenses are in USD, so we then pay another currency conversion fee when we spend.

It's problematic to the extent that we're considering whether the ongoing costs and hassle of setting up a US entity, dealing with international tax, compliance, parent companies etc. would be worthwhile.


This is infuriating. I've been told so many times that it's 'coming'. 5% of top line revenue is pretty significant and now an added 0.5% feels like a slap in the face.

Braintree have offered it for years so this might be the final motivation to switch over.

It's been around 7 or 8 years with Stripe now, and it just seems like they are heavily prioritising increasing their average revenue per user at all costs.


Yep, heard loud and clear. We've been investing a lot in USD (and other currency settlement) in various markets (and actually just released it in Singapore). We're working on it in Australia too.


Thanks for replying, I'm very excited to hear it's in the works!


Won't this eat into your revenue, since you won't make money on the exchange?


People like the GP, whose revenue is eaten into by this not existing, switching away from Stripe would also eat into Stripe's revenue.


Do you really need to set up a US entity just to open a US bank account?

If not, I would set up a US checking account and settle some portion of Stripe transactions there. Take that revenue and pay your US vendors. No currency conversions.


You can have a USD bank account, but Stripe won't pay out to it, they'll only pay out to an AUD account and take the 2% along the way.


Use Pin Payments - they can take and settle in USD (just ask them), and they're an AU company. Similar fee structure to Stripe: 2.x% + 30c.


They also have way more business model exclusions and more onerous terms than Stripe.

