This story was big news when it happened and I'm grateful to Andy Greenberg for writing this retrospective. Pretty much nothing has changed about security at companies in the last ten years that would foil this kind of attack. I mean, maybe folks are a little smarter about catching spearphishing Office docs, and Flash exploits are now a thing of the past, but those have been replaced by contemporary equivalents. And I'm sure conveniences like the not-quite-airgapped crucial equipment still persist. We truly have no idea how to secure systems on the Internet.
As for RSA, it came out a couple of years later that their products had been compromised in various ways by the NSA. Then a couple of years after that, the NSA lost control of its own hacking tools with the infamous Shadow Brokers release. Not only is building secure systems hard, but the US government actively works to undermine its own companies' security.
> RSA executives told me that the part of their network responsible for manufacturing the SecurID hardware tokens was protected by an “air gap”—a total disconnection of computers from any machine that touches the internet. But in fact, Leetham says, one server on RSA’s internet-connected network was linked, through a firewall that allowed no other connections, to the seed warehouse on the manufacturing side.
As others have noted, no, it is not. But that doesn't boggle my mind...
What boggles my mind is that the seed machine, the intervening network, and the firewall did not appear to have any "scream loudly, then shut down when this threshold is exceeded" mitigations in place.
They were wise enough to have a single connection from the seed host to the seed requester. They were wise enough to limit the requester to one request every 15 minutes.
They only discovered that threshold was being exceeded when they logged in to that machine.
The firewall itself should have had detection and response capabilities to notice when calls were being made faster than that, and it should have had a third, dedicated warning connection to alert humans to the fact. The seed host should have had detection and response capabilities.
And, given the value of the asset, it would have been entirely reasonable to have a transparent bit of network gear doing the same, like a custom switch invisible to the request host.
Since the article didn't mention any of these things, and since it said that the high request rate was detected only by humans on the box, I'm going to assume they didn't have these, for reasons mysterious.
EDIT: Come to think of it, since that machine was being used to burn CDs, there should also have been strict limits with appropriate detection mitigations on what that machine could do outbound.
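The kind of mitigation described above is not exotic. A minimal sketch of a rate-threshold check, assuming the one-request-per-15-minutes policy mentioned in the article (the class and method names here are made up for illustration, not anything RSA actually ran):

```python
class SeedRequestMonitor:
    """Hypothetical watchdog: flag seed requests that arrive faster
    than the intended one-per-15-minutes policy, so a human (or an
    automatic shutdown) can be triggered instead of relying on
    someone happening to be logged in to the box."""

    def __init__(self, min_interval_s: float = 15 * 60):
        self.min_interval_s = min_interval_s
        self.last_request: float | None = None

    def record_request(self, now: float) -> bool:
        """Record a request at time `now` (seconds); return True if it
        violates the rate policy and should raise an alert."""
        violated = (self.last_request is not None
                    and now - self.last_request < self.min_interval_s)
        self.last_request = now
        return violated
```

In a real deployment the alert path would go out a dedicated connection to a human, as the parent comment suggests, and the violation could also trip an automatic shutdown of the seed host.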
That configuration is more typically referred to as a bastion server (or bastion host, per Wikipedia).
Access between network segments, or to a protected host, goes through a single, specifically-hardened host. Pass-through network traffic (NATing or bridging) is typically disabled, or at least not enabled by default, though in practice it's challenging to entirely prevent tunnelling.
But no, it is not an air-gapped system. Likely a journalistic compromise as "bastion host" is a less familiar term to the public.
... and from the article, the description of "airgapped" apparently came from RSA management. That may have been their understanding. Todd Leetham pretty clearly understood otherwise.
"A firewall that only allows connections from one Internet-connected server" is quite literally not an air gap. It's one of those things that you look back on and wonder how in the world that decision was made.
seems totally unjustifiable to me. if they truly needed to keep backups of their customers' seeds, then send them through a data diode to an actually-airgapped tape deck. there's no reason to keep seeds on the machines where they were generated.
...also, since they detected the breach before the attackers got to the "seed warehouse," why did they try to tail them in real time? just pull power to the whole DC.
The only thing I can think of with the "tailing in real time" is that there was some journalistic license taken to spice up the story. Otherwise, you don't even need to pull power to the whole DC. Just cut off all Internet access.
"Moments later, his computer’s command line came back with a response: “File not found.” He examined the Rackspace server’s contents again. It was empty. Leetham’s heart fell through the floor: The hackers had pulled the seed database off the server seconds before he was able to delete it."
I get the compulsion to delete it, but deleting it wouldn't have provided any real comfort. You would have no idea if that was the only copy. So, delete it just in case it is, but it doesn't change what you would have to do afterwards...the master keys have to be assumed leaked.
I mean you're never not going to delete it when given the option. As tiny as the chances are, it could mean your downstream consumers end up safe. Although yes, it should not stop you going into full damage control mode.
It might make customers less safe: If you delete the file, you reveal to the attackers what you know. The attackers may move more quickly to exploit the seeds, and it may disrupt your investigation, on which your customers depend: The attackers may abandon that path and follow another one that you are unaware of.
> In the hours that followed, RSA’s executives debated how to go public. One person in legal suggested they didn’t actually need to tell their customers, Sam Curry remembers. Coviello slammed a fist on the table: They would not only admit to the breach, he insisted, but get on the phone with every single customer to discuss how those companies could protect themselves. Joe Tucci, the CEO of parent company EMC, quickly suggested they bite the bullet and replace all 40 million-plus SecurID tokens. But RSA didn’t have nearly that many tokens available—in fact, the breach would force it to shut down manufacturing. For weeks after the hack, the company would only be able to restart production in a diminished capacity.
> As the recovery effort got under way, one executive suggested they call it Project Phoenix. Coviello immediately nixed the name. “Bullshit,” he remembers saying. “We're not rising from the ashes. We're going to call this project Apollo 13. We're going to land the ship without injury.”
This is the sort of response that would increase my trust to choose this company in the future. This is not easy. Our human instincts naturally kick in to minimize our faults and protect ourselves. Choosing to put the customer first at risk of your own reputation is hard, but the right choice, even for your reputation in the end.
I don't know the contract with clients or anything, but "didn't actually need to tell their customers" seems totally illegal to me if RSA is the provider of one of the most fundamental layers of security to large companies. Not saying RSA didn't do the best they could to secure themselves from the hack, but this story is a PR piece.
There are plenty of examples supporting that. The earliest I remember is the Tylenol murders, where 31 million bottles of Tylenol were taken off the shelves:
If anyone else is wondering why there's so much human drama written into what should otherwise probably have been a normal retrospective, it's (at least by my judgment) to increase the likelihood that the article gets optioned for a film or TV series, etc.
Or at least auto-delete them after 30 days, in case a customer didn't get theirs, and needed it resent. Retention policies limit the blast radius when there is a problem.
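A retention sweep like that is a few lines of code. A hypothetical sketch (the function name and directory layout are assumptions for illustration, not anything from the article):

```python
import os
import time


def purge_old_seeds(directory: str, max_age_days: int = 30) -> list[str]:
    """Delete seed files older than `max_age_days` so a breach of this
    box can only ever expose a limited window of seeds. Returns the
    names of the files that were removed."""
    cutoff = time.time() - max_age_days * 86400
    removed = []
    for name in os.listdir(directory):
        path = os.path.join(directory, name)
        if os.path.isfile(path) and os.path.getmtime(path) < cutoff:
            os.remove(path)
            removed.append(name)
    return removed
```

Run from cron (or equivalent), this is the "limit the blast radius" policy the parent comment describes: the seed warehouse would only ever hold the last 30 days of output.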
Unlike U2F and similar specs, there was no direct communication between the SecurID token and any other device, so the available channel carried far less entropy than is needed to validate public-key signatures. That necessitated having a shared secret between the token and auth server.
> That necessitated having a shared secret between the token and auth server.
Yes, but why did there have to be a central server with the shared secret for every token on the planet?
The way SecurIDs were designed, there was no way to plug into them, so there was no way to program them. So when you bought a batch, you entered each serial number into your RSA auth server, which phoned home and got the seed/secret.
Huge single point of failure.
TOTP (and HOTP before it) has a shared secret between the auth server and the token (software), but if Company X is hacked they don't get the secrets to Company Y:
> but why did there have to be a central server with the shared secret for every token on the planet?
Yeah, this struck me as a huge flaw. The breached system was used to create CDs full of IDs for customer deployment. For convenience the manufacturing system was almost but not fully air gapped. They retained the ID data in case the customer needed a copy in the future. However, keeping all of the IDs ever made on one system seems crazy.
If they had just deleted the data after backing it up to discrete offline media every week...
> If they had just deleted the data after backing it up to discrete offline media every week...
Data loss probably scared them more than risk of breach.
The real failure, after all, was not having the system actually airgapped. Aside from electromagnetic leakage through the power system there isn't much difference between spinning disks and tapes if they're not connected to anything else.
I would lose trust if I found out that they retained copies of my private cryptographic data. Isn't that shocking in a company as sophisticated as RSA?
Good on the author for noting that while common now, the practice of dumping passwords from memory (a la Mimikatz) was not common until some time after this attack.
> Since when were NDAs routinely limited to 10 years?
I'm not seeing an indication that it was routine, just that all the people involved happened to have 10 year NDAs in place. Might've been RSA-specific, potentially as a consequence of the breach or just an artifact of RSA's own policies; it's not actually mentioned. I'm also only familiar with 5 year NDAs.
I just went through all the Confidentiality agreements I could find in my mail spool and none of them had an explicit time limit. Is it normal for people to have 5-year NDAs, or even 10-years, for company secrets? How does that make sense? One of the main characters in this story had a tenure at RSA that exceeded 10 years.
It's standard-ish to be able to request a 10-year NDA for anything that you're not part of / on your way out of, e.g. I know people who have them for severance / mutual non-disparagement packages.
Lasership, Inc. v. Watson: the court found an NDA unenforceable because its term was indefinite, which the court deemed an unreasonable term for the agreement. It is important that contracts have a well-defined start date and end date.
An agreement cannot be perpetual. A contract without a specific time period may potentially be terminable by either party, with notice, once the business relationship ends after a reasonable amount of time. So stating a number of years (less than 50) may strengthen the NDA in some respects: the "indefinite" NDA of an employee may possibly be terminated unilaterally by the employee with written notice after they leave the job, so it's better for the employer to have them sign a new NDA at the time of separation that gives the agreement a time period.
> Multiple executives insisted that they did find hidden listening devices—though some were so old that their batteries were dead. It was never clear if those bugs had any relation to the breach.
Well that’s not exactly comforting. Who else might have had the keys to the kingdom?
The story was almost certainly written with media optioning in mind, hence the human drama. It's common with longform journalism, and it's common for the author as well. https://www.imdb.com/name/nm5200697/
"Please don't complain about website formatting, back-button breakage, and similar annoyances. They're too common to be interesting. Exception: when the author is present. Then friendly feedback might be helpful."
Every one of those choices was deliberate though. There was an aesthetic and style they were after. You can't say the same thing about flashing ads and text shuffling around as you scroll.
I've clubbed it out of the title above, and added it to HN's debaiting software, so in the future it will get automatically dropped. (Most of the time.)
It doesn't sound like it. Even our production servers hosted by a "rackspace"-style company have outgoing ports closed by default, and we earn a tiny amount compared to RSA.
I know there will be reasons, but honestly, the server should have been air-gapped or something. I can't imagine the data needs changing very often, so why not copy it across the gap on a USB stick when you need to and leave the machine non-networked otherwise?
Of course, I know nothing about this organisation, it just sounds weird that a system that was so crucial was so vulnerable.
I'd bet that they had most ports blocked and the attackers used multi-stage tunnels. Details like that probably just don't make it into a Wired article.
https://www.reuters.com/article/idUSBRE9BJ1C220131220?irpc=9... https://arstechnica.com/information-technology/2014/01/how-t...