I agree that this is probably about persistence. Initially I thought the developer was playing a long con to dump some crypto exchange and make off with literally a billion dollars or more.
But if that was the case they wouldn't bother with the key. It'd be a one-and-done situation. It would be a stop-the-world event.
It's also worth noting that the spycraft involved a coordinated harassment campaign against the original maintainer, using multiple writing styles, to accelerate a transition of maintainership to the attacker:
At first I thought the guy who did this was a lone wolf, but now I believe it was indeed a state actor. They coordinated and harassed the original maintainer into giving them access to the project; basically, they hijacked the open source project. The poor guy (the original maintainer) was alone against a state actor who was persistent, with the goal of hijacking and then backdooring the open source project.
It seems like they were actively looking[0] for an open source compression library they could inject with vulnerable code and then exploit and backdoor afterwards.
> At first I thought the guy who did this was a lone wolf, but now I believe it was indeed a state actor. They coordinated and harassed the original maintainer into giving them access to the project; basically, they hijacked the open source project. The poor guy (the original maintainer) was alone against a state actor who was persistent, with the goal of hijacking and then backdooring the open source project.
It could just as easily be a lone wolf capable of writing in different styles (really not difficult; if one spends enough time online, one gets exposed to all sorts of styles) or even just your run-of-the-mill criminal gang.
In fact, "multiaccount" cheating (where a single person has multiple accounts that interact with one another in a favourable manner, e.g. trading stuff for reduced fees, or letting themselves be killed to pad stats, or expressing opinions to leave the impression of mass support, etc.) has been present in many MMO games for more than a decade. I recall an online game I was really into back in high school, eRepublik, which simulated real-world politics, economics and warfare (in a really dumbed-down version, of course), and multiaccounts were especially prevalent in politics, where candidates for election would often be padded out by fake groups, or even entire countries would be taken over by a gang of real people with like ten accounts each.
The complexity is nothing new. The only indicator this could be a state actor is the long-con aspect of it. The attacker(s) were looking at years between starting and actually being able to exploit anything.
> “Recently I've worked off-list a bit with Jia Tan on XZ Utils and”
> "In 2021, JiaT75 submitted a pull request to the libarchive repository with the title ‘Added error text to warning when untaring with bsdtar’ which seemed legitimate at first glance. "
Reading that link, it seems like the vulnerability is that a file name gets printed unsanitized, so you can embed terminal control characters in a file name and have them sent straight to the terminal.
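For a concrete picture, here's a minimal toy sketch (Python, nothing to do with libarchive's actual code; both function names are made up) of the difference between printing an attacker-controlled file name raw and sanitizing it first:

```python
import sys

def warn_unsafe(name: str) -> None:
    # Vulnerable pattern: the attacker-controlled file name is written to the
    # terminal verbatim, so embedded escape sequences get interpreted.
    sys.stderr.write(f"tar: could not extract {name}\n")

def warn_safe(name: str) -> None:
    # Safer pattern: replace non-printable characters before printing.
    cleaned = "".join(ch if ch.isprintable() else "?" for ch in name)
    sys.stderr.write(f"tar: could not extract {cleaned}\n")

# A file name carrying an ANSI "clear screen" sequence plus a fake prompt.
evil_name = "notes.txt\x1b[2J\x1b[1;1H$ "
warn_safe(evil_name)   # escape bytes come out as '?', nothing is interpreted
```

On its own, that class of bug (terminal escape injection) is usually treated as low severity, which is presumably part of why the change looked harmless at the time.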
Ugh, that this psyops sockpuppetry may have started or contributed to the maintainer's mental health issues seems like the most depressing part of all this. Maintaining OSS is hard enough.
I hope that one takeaway from this entire situation is that if you're a maintainer and your users are pushing you outside your comfort level, that's a reflection on them, not on you - and that it could be reflective of something far, far worse than just their impatience.
If you, as a maintainer, value the stability of your software and also your own mental health, then resisting calls for new features, scope increases, and rapid team additions is entirely something you can be proud of.
Yeah, that was a hard read. I think it highlights the importance of emotionally distancing yourself from your projects.
It's also interesting because it exploits the current zeitgeist around maintainers' responsibilities to their users and communities. Personally, I think it puts too many expectations on maintainers' shoulders, especially when they're working for free.
Personally, I think maintainers are doing favors for users, and if users don't like how (or whether) the project is progressing, then too bad. That's not a popular sentiment, though.
Probably didn't start them, considering that he already mentioned (long-term) mental health issues in the mailing list discussion in which the (likely) sock puppets started making demands.
But it's hard to see the whole thing helping, and it is some combination of depressing and infuriating. I hope he's doing ok.
I think the most convincing case made about the sock puppets is around account creation dates, and also people disappearing after they get what they need. Like Jigar disappearing after Jia becomes maintainer. Or the guy "misoeater19", who creates his Debian bug tracker account just to say that his work is totally blocked on xz 5.6.1 getting into Debian unstable.
>But if that was the case they wouldn't bother with the key. It'd be a one-and-done situation. It would be a stop-the-world event.
Why not? It's possible someone else could've discovered the exploit before the big attack but decided not to disclose it. Or that they could've disclosed it and caused a lot of damage the attacker didn't necessarily want. And they easily could've been planning a long-term way to access a huge swath of machines while also biding their time for a huge heist.
They have no reason to not restrict the backdoor to their personal use. And it probably is spycraft of some sort, and I think more likely than not it's a nation-state, but not necessarily. I could see a talented individual or group wanting to pull this off.
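For what it's worth, keying a backdoor to its operator is cheap to do. Here's a hypothetical sketch (Python with the cryptography package; the real backdoor reportedly checked an Ed448 signature, so Ed25519 and all the names here are just stand-ins) of why only the holder of the private key can trigger it:

```python
# pip install cryptography
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

# Hypothetical sketch: only the holder of the matching private key can trigger
# the implant, so anyone else who stumbles onto the endpoint gets nothing.
operator_key = Ed25519PrivateKey.generate()     # stays with the attacker
IMPLANT_PUBKEY = operator_key.public_key()      # the only thing baked into the implant

def handle_payload(command: bytes, signature: bytes) -> None:
    try:
        IMPLANT_PUBKEY.verify(signature, command)
    except InvalidSignature:
        return                                   # silently behave normally
    print(f"would run: {command!r}")             # stand-in for the privileged action

cmd = b"id"
handle_payload(cmd, operator_key.sign(cmd))      # accepted: signed by the operator
handle_payload(cmd, b"\x00" * 64)                # rejected: anyone else's attempt
```

The point being that the key buys exclusivity for almost no extra effort, whoever the attacker is.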
I think we need to consider the context. The attacker ultimately only had control over the lzma library. I'm skeptical that there's an innocent-looking way lzma could have openly introduced an "accidental" RCE vuln that would affect sshd. Of course, I agree that they also wanted an explicit stealth backdoor for all the other reasons, but I don't think a plausibly deniable RCE or authentication-bypass vuln would even have been possible.
Inserting a change like this as a one-off would attract a lot of scrutiny, which would probably get it detected. Instead, the bad actor spent years contributing to the project before dropping this.
So, while writing the exploit might be a couple of hours work, actually pulling it off is quite a bit more difficult.
I don't think they were using complexity as the reason for that assumption, but rather goals. Keying the backdoor doesn't require a nation-state's level of resources, but it is a more attractive feature for a nation state that wants to preserve access over time and prevent adversaries from making use of it.
This makes sense in closed-source products, where you'll never get to audit the source for such exploits, but little sense in open source projects, where anyone can audit the code.
That is to say, an enterprise router or switch would likely have secured exploits put there by corporate and national security agencies, whereas open source exploits would benefit from plausible deniability.
And on the contrary, creating a vulnerability that isn't attributable to any particular attack group provides a bit more deniability and anonymity. It's hard to say which a nation-state actor would find more favorable.
An interesting angle: if this had been observed in use, rather than discovered before anyone observed it being used, there would be a glaring cui bono attached to whatever it was being used for.
Theoretically, I could even imagine the public-key security being motivated by some "mostly benign" actor who fell in love with the question "could I pull this off?" and dreaded the scenario of their future hacker superpower falling into the wrong hands. It's not entirely unthinkable that an alternative timeline where this was never discovered would only ever see this superbackdoor used for harmless pranks, winning bets and the like. In that unlikely scenario, the discovery, via the state-actor and crime-syndicate activity it will undoubtedly inspire, would paradoxically lead to a less safe computing world than the timeline where the person holding the key had free RCE almost everywhere.
It'll be kind of tragic if this backdoor turns out to be the developer's pet "enable remote debugging" code, and they didn't mean for it to get out into a release. ;)
> But if that was the case they wouldn't bother with the key. It'd be a one-and-done situation. It would be a stop-the-world event.
Now it looks more like nation-state spycraft.