The full story of the RSA hack can finally be told (wired.com)
196 points by whiteyford on May 20, 2021 | hide | past | favorite | 81 comments


This story was big news when it happened and I'm grateful to Andy Greenberg for writing this retrospective. Pretty much nothing has changed about security at companies in the last ten years that would foil this kind of attack. I mean maybe folks are a little smarter about catching spearphishing Office docs, and Flash exploits are now a thing of the past, but those are replaced by contemporary equivalents. And I'm sure conveniences like the not-quite-airgapped crucial equipment still persist. We truly have no idea how to secure systems on the Internet.

As for RSA, it came out a couple of years later that its products had been compromised in various ways by the NSA. Then a couple of years after that, the NSA lost control of its own hacking tools with the infamous Shadow Brokers release. Not only is building secure systems hard, but the US government actively works to undermine its own companies' security.

https://www.reuters.com/article/idUSBRE9BJ1C220131220?irpc=9...

https://arstechnica.com/information-technology/2014/01/how-t...


> RSA executives told me that the part of their network responsible for manufacturing the SecurID hardware tokens was protected by an “air gap”—a total disconnection of computers from any machine that touches the internet. But in fact, Leetham says, one server on RSA’s internet-connected network was linked, through a firewall that allowed no other connections, to the seed warehouse on the manufacturing side.

That's not really an air gap, is it?


As others have noted, no, it is not. But that doesn't boggle my mind...

What boggles my mind is that the seed machine and the intervening network and the firewall did not appear to have "scream loudly then shutdown when this threshold is exceeded" mitigations in place.

They were wise enough to have a single connection from the seed host to the seed requester. They were wise enough to limit the requester to one request every 15 minutes.

They only discovered that threshold was being exceeded when they logged in to that machine.

The firewall itself should have had detection and response capabilities to notice when calls were being made faster than that, and it should have had a third, dedicated warning connection to alert humans to the fact. The seed host should have had detection and response capabilities.

And, given the value of the asset, it would have been entirely reasonable to have a transparent bit of network gear doing the same, like a custom switch invisible to the request host.

Since the article didn't mention any of these things, and since it said that the high request rate was detected only by humans on the box, I'm going to assume they didn't have these, for reasons mysterious.

EDIT: Come to think of it, since that machine was being used to burn CDs, there should also have been strict limits with appropriate detection mitigations on what that machine could do outbound.
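The kind of "scream loudly then shut down" mitigation described above can be sketched in a few lines. This is a hypothetical illustration (the class and threshold handling are invented, not RSA's actual tooling), assuming the documented one-request-per-15-minutes limit:

```python
import time

# Alert threshold: the article says the requester was limited to
# one seed request every 15 minutes.
MIN_INTERVAL = 15 * 60  # seconds

class SeedRequestMonitor:
    """Hypothetical watchdog: page a human and fail closed when seed
    requests arrive faster than the documented rate limit."""

    def __init__(self, min_interval=MIN_INTERVAL, alert=print):
        self.min_interval = min_interval
        self.alert = alert  # e.g. a pager hook on a dedicated channel
        self.last_request = None

    def record_request(self, now=None):
        now = time.time() if now is None else now
        if self.last_request is not None and now - self.last_request < self.min_interval:
            # "Scream loudly then shut down": alert humans, refuse service.
            elapsed = now - self.last_request
            self.alert(f"ALERT: seed request only {elapsed:.0f}s after previous")
            raise RuntimeError("request rate threshold exceeded; failing closed")
        self.last_request = now
```

The point being that both the firewall and the seed host could have run something this simple, rather than relying on a human happening to be logged in.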


That configuration is more typically referred to as a bastion server (or bastion host, per Wikipedia).

Access between network segments, or to a protected host, is through a single specifically-hardened host. Through traffic (NAT or bridging) is typically disabled, or at least not provided by default, though in practice it's challenging to entirely prevent tunnelling.

But no, it is not an air-gapped system. Likely a journalistic compromise as "bastion host" is a less familiar term to the public.

https://en.wikipedia.org/wiki/Bastion_host
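For illustration, the "firewall that allowed no other connections" setup the article describes might look something like this in iptables. Addresses and the port are made up; this is a sketch of the pattern, not RSA's actual configuration:

```shell
# Hypothetical rules on the seed-warehouse side: default-deny in every
# direction, then permit exactly one peer (the internet-facing requester
# at 10.0.1.5) on exactly one port.
iptables -P INPUT DROP
iptables -P OUTPUT DROP
iptables -P FORWARD DROP
iptables -A INPUT  -p tcp -s 10.0.1.5 --dport 8443 \
    -m state --state NEW,ESTABLISHED -j ACCEPT
iptables -A OUTPUT -p tcp -d 10.0.1.5 --sport 8443 \
    -m state --state ESTABLISHED -j ACCEPT
```

A single permitted flow like this is exactly the bastion pattern: a choke point, not an air gap.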


... and from the article, the description of "airgapped" apparently came from RSA management. That may have been their understanding. Todd Leetham pretty clearly understood otherwise.


"A firewall that only allows connections from one Internet-connected server" is quite literally not an air gap. It's one of those things that you look back on and wonder how in the world that decision was made.


seems totally unjustifiable to me. if they truly needed to keep backups of their customers' seeds, then send them through a data diode to an actually-airgapped tape deck. there's no reason to keep seeds on the machines where they were generated.

...also, since they detected the breach before the attackers got to the "seed warehouse," why did they try to tail them in real time? just pull power to the whole DC.


The only thing I can think of with the "tailing in real time" is that there was some journalistic license taken to spice up the story. Otherwise, you don't even need to pull power to the whole DC. Just cut off all Internet access.


They must have meant the executives had an air gap between their ears.


Seems more like an attempted air gap :facepalm:


"Moments later, his computer’s command line came back with a response: “File not found.” He examined the Rackspace server’s contents again. It was empty. Leetham’s heart fell through the floor: The hackers had pulled the seed database off the server seconds before he was able to delete it."

I get the compulsion to delete it, but deleting it wouldn't have provided any real comfort. You would have no idea if that was the only copy. So, delete it just in case it is, but it doesn't change what you would have to do afterwards...the master keys have to be assumed leaked.


I mean you're never not going to delete it when given the option. As tiny as the chances are, it could mean your downstream consumers end up safe. Although yes, it should not stop you going into full damage control mode.


It's moderately likely the employee in fact deleted that copy and the "disappeared seconds before it happened" is a fig leaf or polite fiction.

Have to assume the data were compromised anyway, as you point out.


It might make customers less safe: If you delete the file, you reveal to the attackers what you know. The attackers may move more quickly to exploit the seeds, and it may disrupt your investigation, on which your customers depend: The attackers may abandon that path and follow another one that you are unaware of.


> In the hours that followed, RSA’s executives debated how to go public. One person in legal suggested they didn’t actually need to tell their customers, Sam Curry remembers. Coviello slammed a fist on the table: They would not only admit to the breach, he insisted, but get on the phone with every single customer to discuss how those companies could protect themselves. Joe Tucci, the CEO of parent company EMC, quickly suggested they bite the bullet and replace all 40 million-plus SecurID tokens. But RSA didn’t have nearly that many tokens available—in fact, the breach would force it to shut down manufacturing. For weeks after the hack, the company would only be able to restart production in a diminished capacity.

> As the recovery effort got under way, one executive suggested they call it Project Phoenix. Coviello immediately nixed the name. “Bullshit,” he remembers saying. “We're not rising from the ashes. We're going to call this project Apollo 13. We're going to land the ship without injury.”

This is the sort of response that would increase my trust and make me choose this company in the future. This is not easy. Our human instincts naturally kick in to minimize our faults and protect ourselves. Choosing to put the customer first at risk to your own reputation is hard, but it's the right choice, even for your reputation in the end.


I don't know what the contracts with clients said, but "didn't actually need to tell their customers" seems totally illegal to me if RSA is the provider of one of the most fundamental layers of security to large companies. Not saying that RSA didn't do the best they could to recover from the hack, but this story is a PR piece.


There are plenty of examples supporting that. The earliest I remember was the Tylenol murders, where 31 million bottles of Tylenol were taken off the shelves:

https://en.wikipedia.org/wiki/Tylenol_(brand)#1982_Chicago_T...

In the end, their brand got stronger and more trustworthy.

(and I believe we got sealed bottles)


Looks like Tylenol is/was owned by J&J... the same company that knowingly sold talcum powder that caused cancer:

https://www.npr.org/2020/05/19/859182015/johnson-johnson-sto...


> sold talcum powder that caused cancer

Ummm

Your link contained no evidence for this. A search of the NHS website also suggests no clear evidence [1]. Cancer Research (a respected UK charity) give a layman's summary (albeit focusing on ovarian cancer), stating no clear evidence and pointing out that there are far more serious risks to worry about [2].

[1] https://www.evidence.nhs.uk/search?om=[{%22ety%22:[%22Inform...

[2] https://www.cancerresearchuk.org/about-cancer/causes-of-canc...


Just imagine you knowingly let parents treat their babies with asbestos... and what happened? Nothing

https://www.cancer.org/cancer/cancer-causes/talcum-powder-an...


Tamper-proof. Yes, that was the example that occurred to me.


> Tamper-proof

You probably see "tamper-resistant" more often.


I would say it is "tamper-evident" as you can tell if the seal has been broken.


If anyone else is wondering as to why there's so much human drama written in what should otherwise probably have been a normal retrospective, it's (at least by my judgment) to increase the likelihood that the article gets optioned for a film or tv series, etc.

This isn't a first for Andy Greenberg, either. https://www.imdb.com/name/nm5200697/

(My comment isn't critical of his writing; it's merely an effort at explaining it.)


I mean, I also found the story much more engaging because it involved actual humans with human feelings and reactions.


It was a bit long-winded but I was thinking that it would make a great movie already in the first half.

Now I wonder if there are any cool SecOps movies or tv shows out there(?)


Mr Robot is somewhat realistic and very good, too.


Sir, I believe you forgot to add /s to your message.


I would think it would be better to have a provisioning design that did not require that the company retain the seed data for every fob they had sold.


Or at least auto-delete them after 30 days, in case a customer didn't get theirs, and needed it resent. Retention policies limit the blast radius when there is a problem.
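A retention sweep like the one suggested above is a few lines of code. This is a hypothetical sketch; the directory layout, `.seed` filenames, and 30-day window are invented for illustration:

```python
import time
from pathlib import Path

# Assumed policy: per-customer seed files older than 30 days are purged,
# keeping only what a customer might still need resent.
RETENTION_SECONDS = 30 * 24 * 3600

def purge_old_seeds(seed_dir, now=None, retention=RETENTION_SECONDS):
    """Delete seed files past the retention window; return deleted names."""
    now = time.time() if now is None else now
    removed = []
    for f in Path(seed_dir).glob("*.seed"):
        if now - f.stat().st_mtime > retention:
            f.unlink()
            removed.append(f.name)
    return removed
```

Run from cron, something like this would have meant the "seed warehouse" only ever held a month's worth of blast radius.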


Or move them to cold storage periodically that's air gapped and they are transferred via sneakernet encrypted.


And a wise business decision since you would also benefit from having to replace the client's fobs in the event that they lost their seeds.


Unlike U2F and similar specs, there was no direct communication between the SecurID tokens and any other device, limiting the bandwidth to less than the entropy necessary to validate public-key signatures. That necessitated having a shared secret between the token and auth server.


> That necessitated having a shared secret between the token and auth server.

Yes, but why did there have to be a central server with the shared secret for every token on the planet?

The way the SecurIDs were designed, there was no way to plug into them, so there was no way to program them. So when you bought a batch, you entered each serial number into your RSA auth server, which phoned home and got the seed/secret.

Huge single point of failure.

TOTP (and HOTP before it) has a shared secret between the auth server and the token (software), but if Company X is hacked they don't get the secrets to Company Y:

* https://en.wikipedia.org/wiki/Time-based_One-Time_Password
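The per-deployment model described above is easy to see in code. Here's a minimal RFC 4226/6238 sketch (standard HOTP/TOTP, not RSA's proprietary SecurID algorithm): each customer provisions its own random secret, so a breach at Company X reveals nothing about Company Y:

```python
import hashlib
import hmac
import struct
import time

def hotp(secret: bytes, counter: int, digits: int = 6) -> str:
    """RFC 4226 HOTP: HMAC-SHA1 over a 64-bit counter, dynamically truncated."""
    digest = hmac.new(secret, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = digest[-1] & 0x0F
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

def totp(secret: bytes, at=None, step: int = 30) -> str:
    """RFC 6238 TOTP: HOTP keyed on the current 30-second time window."""
    at = time.time() if at is None else at
    return hotp(secret, int(at // step))
```

The auth server still stores a shared secret per token, but there's no central warehouse of everyone's seeds to steal.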


> but why did there have to be a central server with the shared secret for every token on the planet?

Yeah, this struck me as a huge flaw. The breached system was used to create CDs full of IDs for customer deployment. For convenience the manufacturing system was almost but not fully air gapped. They retained the ID data in case the customer needed a copy in the future. However, keeping all of the IDs ever made on one system seems crazy.

If they had just deleted the data after backing it up to discrete offline media every week...


> If they had just deleted the data after backing it up to discrete offline media every week...

Data loss probably scared them more than risk of breach.

The real failure, after all, was not having the system actually airgapped. Aside from electromagnetic leakage through the power system there isn't much difference between spinning disks and tapes if they're not connected to anything else.


It's great that the system that was printing CDs somehow had to be internet-connected; it's not like they were emailing these keys


Both HOTP and TOTP are vulnerable to phishing, unlike U2F.


"Perfect is the enemy of good."

I'll take whatever improvements I can get in security.


I would lose trust if I found out that they retained copies of my private cryptographic data. Isn't that shocking in a company as sophisticated as RSA?


> Now, staring at the network logs on his screen, it looked to Leetham like these keys to RSA’s global kingdom had already been stolen.

That must've been a sickening feeling.


Good on the author for noting that while common now, the practice of dumping passwords in memory (a la Mimikatz) was not common until some time after this attack.



Since when were NDAs routinely limited to 10 years?


> Since when were NDAs routinely limited to 10 years?

I'm not seeing an indication that it was routine, just that all the people involved happened to have 10 year NDAs in place. Might've been RSA-specific, potentially as a consequence of the breach or just an artifact of RSA's own policies; it's not actually mentioned. I'm also only familiar with 5 year NDAs.


I just went through all the Confidentiality agreements I could find in my mail spool and none of them had an explicit time limit. Is it normal for people to have 5-year NDAs, or even 10-years, for company secrets? How does that make sense? One of the main characters in this story had a tenure at RSA that exceeded 10 years.


It's standard-ish to be able to request a 10-year NDA for anything that you're not part of / on your way out of, e.g. I know people that have them for severance / mutual non-disparagement packages.


Could it be a matter of state law?


I've got an equal mix of 5 year and unrestricted NDAs, no 10 year NDAs, oddly enough.


Lasership, Inc. v. Watson found an NDA unenforceable because its term was indefinite, which the court deemed unreasonable - it is important that contracts have a well-defined start date and end date.

An agreement cannot be perpetual. A contract without a specific time period may potentially be terminable by either party, with notice, once the business relationship ends after a reasonable amount of time. So stating a number of years (less than 50) may strengthen the NDA in some respects - the "indefinite" NDA of an employee may possibly be terminated unilaterally by the employee with written notice after they leave the job, so it's better for the employer to have them sign a new NDA, with a stated time period, at the time of separation.


My thoughts exactly. I'm curious if these people thought their NDAs expired after 10 years.


Some searching tells me 1 to 5 years is considered normal.


> Multiple executives insisted that they did find hidden listening devices—though some were so old that their batteries were dead. It was never clear if those bugs had any relation to the breach.

Well that’s not exactly comforting. Who else might have had the keys to the kingdom?



This story would be many times more interesting with more technical details and less "human drama".


The story was almost certainly written with media optioning in mind, hence the human drama. It's common with longform journalism, and it's common for the author as well. https://www.imdb.com/name/nm5200697/


Wired is completely full of shit, just like EFF. FBI fronts, nothing but lightning rod ops.

RSA SecurID has been compromised for decades, way before this sensationalist, sell-newspapers puff piece.


Are the seeds large primes?


No, it's a symmetric system, they're just random values.
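Since it's a symmetric system, a seed is nothing more than cryptographically strong random bytes. A sketch for intuition (the 128-bit length is an assumption for illustration, not SecurID's actual seed format):

```python
import secrets

def generate_seed(nbytes: int = 16) -> bytes:
    # A seed is just random bytes; its only value lies in staying secret,
    # shared between exactly one token and one auth server.
    return secrets.token_bytes(nbytes)
```

No primes, no key-pair math: whoever holds the seed can compute every code the token will ever display.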


non-paywalled link?


Disable javascript



Open link in duckduckgo


Should have just turned off their computers


I don't have an ad-blocker on this machine and I couldn't finish reading the page. The ads are ridiculously obnoxious.

It wasn't that long ago when magazines like Wired cared a great deal about the page. What a mess.


"Please don't complain about website formatting, back-button breakage, and similar annoyances. They're too common to be interesting. Exception: when the author is present. Then friendly feedback might be helpful."

https://news.ycombinator.com/newsguidelines.html

(This guideline isn't there because such complaints are wrong or inaccurate; just the opposite.)


> It wasn't that long ago when magazines like Wired cared a great deal about the page

Do you remember the super-early Wired print editions? Those colors made them barely readable.


Every one of those choices was deliberate, though. There was an aesthetic and style they were after. You can't say the same about flashing ads and text shuffling around as you scroll.


I think the aesthetic they are going for these days is "profitable".


Install uBlock origin and block the javascript running on the page.

You still get to read the article, but no "ridiculously obnoxious" ads appear anywhere.


They still do, if you pay.


That's not true. I just subscribed and the obnoxious ads and reflowing text are still there.

Edit: I had to sign out and then back in after paying. Now the only ads I see are in-house ones (they really want you to sign up for newsletters).


I see no ads as a subscriber and the subscription terms explicitly states ad-free, unlimited browsing for 1 year for $10, subscription benefits listed here: https://subscribe.wired.com/subscribe/wired/121743?source=AM...


I signed out and then signed back in and now they are gone. Thanks.


Man do I ever hate the word "stunning" in headlines.


I've clubbed it out of the title above, and added it to HN's debaiting software, so in the future it will get automatically dropped. (Most of the time.)


Thanks, dang. It's like Christmas came early!




"single, well-protected server"

It doesn't sound like it. Even our production servers hosted by a "rackspace" company have outgoing ports closed by default, and we earn a tiny amount compared to RSA.

I know there will be reasons but honestly, the server should have been air-gapped or something. I can't imagine they need changing very often so why not copy it across the gap on a USB stick when you need it and leave it non-networked otherwise?

Of course, I know nothing about this organisation, it just sounds weird that a system that was so crucial was so vulnerable.


I'd bet that they had most ports blocked and the attackers used multi-stage tunnels. Details like that probably just don't make it into a Wired article.


The article explains all of this.



