Anonymous speaks: the inside story of the HBGary hack (arstechnica.com)
345 points by abraham on Feb 16, 2011 | 83 comments



Computer security is obscenely asymmetric - an attacker only has to find one flaw, once, somewhere. A defender needs to constantly monitor, test, review, isolate and basically never make any mistakes.

It is easy to look at almost any intrusion and attribute it to poor defenses. If HBGary didn't have a SQL injection, they'd have had an XSS vuln. Or an employee would get spearphished. Or an attacker at a local coffee shop would compromise a mobile client. Or a backup service would get compromised and unencrypted. Or an interviewee could plant a network listening device. Or the CEO's daughter could win a pre-owned iPhone. Or a secretary would give out a VPN login. And so on and so on.

Did HBGary suck worse than usual? Possibly - but consider: Google China got hit by IE6 + Acrobat vulns, the DOD lost hundreds of thousands of classified documents from an air-gapped, physically controlled system to a single private, OpenBSD may have included side-channel backdoors, Kaspersky lost their source code, and the PS3/iPhone/Xbox/HTC etc. are unable to secure their platforms.

The truth is, a motivated attacker will rarely fail. Anyone reading this would be unlikely to survive 24 hours of a coordinated attack, whether it's done by 16-year-olds, Chinese university students, the Russian mafia, the FBI or simply nerds who know how to Google vulnerabilities.

Fighting back against a group like Anonymous provides the same asymmetric warfare problems as the US military experiences in fighting terrorists, including the inability to respond with similar tactics for legal reasons.

Bottom line is, almost any organization can be subject to this kind of embarrassment without warning.


The 'real' story is that HBGary charges big bucks to tell other companies and/or government agencies how they aren't following security best practices, yet they themselves weren't doing so. I don't think that anyone would be ragging on HBGary for lax security if Anonymous had pulled out some 0day kernel exploit to break into HBGary's systems.

They failed in:

- Keeping their systems patched and up-to-date.

- Convincing/forcing their users to use strong passwords.

- Convincing/forcing their users to use separate passwords per system.

- Convincing/forcing their power users/admins to use a unique, strong password on key systems (i.e. Google Apps Admin).

- Not-invented-here syndrome (or maybe security through obscurity -- hey! if we use an obscure CMS then it won't be exploitable!) with respect to their CMS. I can cut them a little slack here. Had they chosen 3rd-party software, people would doubtless be railing on them for which off-the-shelf 3rd-party software they were using (e.g. had they been exploited through a Wordpress vuln, people would be lambasting them for using Wordpress vs <insert-cms-here>).


The 'real' story is that a motivated attacker will rarely fail.

You can take almost any intrusion and write it up in wildly different ways.

If HBGary had not failed in everything that you listed, odds are you would be listing some other comparable set of failures:

- something somewhere is always unpatched and out of date

- humans always deviate from best practices

- 99.99% of intrusions involve traditional threats, well known vulnerabilities, unpatched systems and human error

I could write up 100 different intrusions done in 100 different ways and almost always make the victim sound incompetent, or like a real life spy novel, or make the defenses sound like fort knox, or make the intruders sound like gods, or make it sound like my product would have prevented them, or draw the conclusion that the security environment is hopeless and out of control.

In the end it doesn't really matter how it happened or what I make it sound like.

Bottom line: Did you get owned [Y/N]


Then again, a motivated defender is a very tough adversary. Think "web server serves plain files only and is disconnected from the internal network"[1], "secure OSes everywhere"[2], "password quality checker installed"[3], "full-disk encryption for all serious data"[4], "SSH logins only via public key"[5], etc. Yes, this takes (some!) real effort. No, getting 0wned by "please drop the firewall and send me the root password" is not acceptable if you're a security outfit.

Really, you can go years without patching if you choose your software properly. (Except browsers - those just suck.)

[1] e.g. https://github.com/mojombo/jekyll [2] e.g. http://www.openbsd.org [3] e.g. http://www.openwall.com/passwdqc/ [4] http://www.openbsd.org/faq/faq14.html#RAID or the equivalent for other OSes [5] All over the internet. Or set up a Kerberos environment and get single-sign on too.


Bottom line: Did you get owned [Y/N]

As I've mentioned elsewhere, competent security needs to take into account sociological/economic analysis. Only looking at the technical and organizational side is literally just playing with yourself.

For example: if you're a major technology company, taking the step of publishing wildly popular content with DRM means you're going to be taking on a lot of highly motivated opponents. History demonstrates that this isn't a fight that you want to take on. Now consider: if you're a security company, what do you think is going to happen when you take on a subset of /b/?

Here's a hint: before you're in the position where you're risking the ire of a large, technically savvy population that has demonstrated success taking on other corporations with comparable or greater resources than yourself, and a history of flouting the law, it really behooves you to do some preparation.

If you've been saving some chump change by keeping your mailserver on the same machine as your webserver, and you're about to take on /b/, now's the time to do something about it.

That's like some athlete not checking if his shoes are laced up properly before the event.

Did this security company ever audit its own security? Either they didn't, or they did an incompetent job of it. Would you trust a security company that doesn't eat its own cooking?


Not necessarily. It's easy to forget that attacks typically depend on multiple layers of security failures - admittedly very common ones, like using the same passwords on many systems. Good security is defense in depth: keeping every component as secure as possible and keeping any breach of security as restricted as possible. It may not be possible to avoid every conceivable attack, but it's certainly possible to withstand a large number of attempted attacks.

The other side of the coin is that attackers don't publicize their losses. Every attacker has a limit to their skillset. If they can't compromise someone they won't announce to the world their failure, they'll just pretend like nothing happened and move on.


- Convincing/forcing their users to use separate passwords per system.

I have to disagree that this one is really a best practice at all. I have dozens of different accounts on different computer systems; if I didn't do at least some password reuse, I would have a hard time writing them all down, and remembering them would be totally impossible.

With that said, I absolutely think passwords on critical systems must be unique and strong. But I have maybe three systems I consider critical and dozens that are not. My bank password, for instance, is unique and long. But I am more worried about being able to remember my passwords than about whether someone who gets into the account I use to play Go online can also get into the account I use to play chess online.


if I didn't do at least some password reuse, I would have a hard time writing them all down, and remembering them would be totally impossible.

There are a variety of good password vault programs out there. I keep my passwords in a KeePass 1 file on Dropbox, and I can run KeePass on Windows, Linux, OS X, and my iPhone and iPad. There are a variety of low-cost and free options that are about this good or better.

At some point, I'll be working on a KeePass-compatible iPhone program that can access Dropbox directly.


> I have to disagree that this one is really a best practice at all. I have dozens of different accounts on different computer systems; if I didn't do at least some password reuse, I would have a hard time writing them all down, and remembering them would be totally impossible.

Yes, it's a pain in the ass (at present), but yes, you should be using different passwords for everything, and pubkey authentication where possible.

The problem of maintaining an encrypted master password list for many different accounts is just a technical one. It will be solved. Keyring managers already do this. I noticed the latest Chrome linux builds use the desktop keyring manager now for saved passwords, rather than storing them unencrypted in the browser's password store.

Personally, until these keyring managers are mature enough, I use a few simple scripts: one which generates a new semi-pronounceable password with random chars, one that adds a new account to a gpg-encrypted master password file, and one that queries the gpg-encrypted master password file when I've forgotten a password to an account.
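A minimal sketch of the first of those scripts in Python (the syllable scheme and character set here are my own assumptions, not the commenter's actual script):

```python
import secrets
import string

def gen_password(syllables=4, extra=3):
    """Generate a semi-pronounceable password: alternating
    consonant-vowel syllables plus a few random trailing chars."""
    consonants = "bcdfghjklmnprstvwz"
    vowels = "aeiou"
    word = "".join(
        secrets.choice(consonants) + secrets.choice(vowels)
        for _ in range(syllables)
    )
    # Random digits/symbols defeat a pure dictionary attack on the word part.
    tail = "".join(secrets.choice(string.digits + "!#%+") for _ in range(extra))
    return word + tail

print(gen_password())
```

The `secrets` module is used rather than `random` because the latter is not suitable for security-sensitive randomness.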


You make good points and I think I may have to work more on using more distinct passwords.

Though, as you allude, this will become easier as keyring managers mature. I am also hoping that as security technology matures, more places will move to forms of two-factor authentication.


> Keeping their systems patched and up-to-date.

Which systems were these? I didn't see anything that implied they were compromised through a missing patch.

If you're referring to the CMS, then that could just be a bit of custom code. We don't know.


From page 2:

"The only way they can have some fun is to elevate privileges through exploiting a privilege escalation vulnerability. These crop up from time to time and generally exploit flaws in the operating system kernel or its system libraries to trick it into giving the user more access to the system than should be allowed. By a stroke of luck, the HBGary system was vulnerable to just such a flaw. The error was published in October last year, conveniently with a full, working exploit. By November, most distributions had patches available, and there was no good reason to be running the exploitable code in February 2011."


Thanks. I was getting confused about the root password with jussi's email.


There was apparently a privilege escalation from Greg Hoglund's ssh account on the support machine - leading to the rootkit.com data and further credentials.


Thanks for pointing that out - I missed it. Was thinking of rootkit.com.


When an attacker uses state of the art techniques to get through your security, you curse them and then redouble your efforts at security.

When an attacker uses rudimentary techniques that have been well known for many years and have straightforward and low-cost counter-measures, then you should rightfully be disgraced. More so if you are a security company.

It's not as though the attack against HBGary was like some expert safe-cracker routine. Rather, it was more similar to someone walking up to the front door, finding it locked, then finding a key under the doormat and letting themselves inside. There's no excuse for that. Not if you have any sort of obligation to maintain a level of security and secrecy.


I consider this instance to be a step worse, given that you have to actively work against the tools available to create an SQL injection vulnerability (or at least take extra steps to work around the easy way of operating).


I wouldn't go quite that far. In PHP, for example, the most straightforward approach is still to build dynamic SQL statements as concatenated strings. It's easy enough to skip input sanitization here or there by accident.

That being said, there's absolutely no excuse for that sort of slapdash engineering today. It's dead simple, even in PHP, to sanitize input, or to use parameter binding / prepared statements to avoid SQL injection vulnerabilities. Those sorts of best practices have been well known for at least the last half decade.
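The contrast can be sketched in a few lines. This uses Python's sqlite3 module standing in for the PHP the comment has in mind; the table and attacker input are hypothetical:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (username TEXT, password_hash TEXT)")
conn.execute("INSERT INTO users VALUES (?, ?)", ("admin", "secret_hash"))

attacker_input = "' OR '1'='1"

# Vulnerable: string concatenation lets the input rewrite the query itself.
vulnerable = "SELECT * FROM users WHERE username = '%s'" % attacker_input
rows_vuln = conn.execute(vulnerable).fetchall()   # matches every row

# Safe: parameter binding treats the input as data, never as SQL.
rows_safe = conn.execute(
    "SELECT * FROM users WHERE username = ?", (attacker_input,)
).fetchall()                                      # matches nothing

print(len(rows_vuln), len(rows_safe))  # 1 0
```

The bound version returns zero rows because no user is literally named `' OR '1'='1`.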


I see these apologist comments on every HBGary article and, please, it's not rocket science.

When you call yourself a "security company", it is not too much to ask that you not expose a half-baked PHP application to the public. It is not too much to ask that your team adhere to the most basic password practices.

A defender needs to constantly monitor, test, review isolate and basically never make any mistakes.

This is bordering on FUD.

No, as far as your network presence is concerned there is a very finite number of attack vectors. For most companies there is no reason to expose more than a very small set of services to the world. Hardening these services is well understood.

If I only open Port 22 and 80 to you, and the webserver will serve only static files, then you'll have a pretty damn hard time owning that box, unless you have access to very rare and precious remote exploits for the kernel, OpenSSH or nginx. And unless I make very basic mistakes in configuring these things.

Moreover, good security is layered. It's absolutely ridiculous to try to come up with excuses for a security company having their CMS broken into, and that being enough to effectively traverse their entire network.

Any admin worth their salt will put the company wordpress on a separate server, with zero trust-relationship to the rest of the infrastructure. It's a no-brainer.

Yes, incompetence is widespread. But please call it out for what it is and don't try to come up with justifications.


Security is not about being totally impenetrable; it's about being too expensive to attack.


Computer security is obscenely asymmetric - an attacker only has to find one flaw, once, somewhere. A defender needs to constantly monitor, test, review isolate and basically never make any mistakes.

That is something the IRA used to say: they only needed to get lucky once, whereas the police needed to get lucky all the time. Of course, humiliating someone on the Internet is a world away from blowing up a shopping centre. If the consequences were more serious than embarrassment, then a lot more resources would go into guarding against it. Schneier talks about attack trees (http://www.schneier.com/paper-attacktrees-ddj-ft.html) - always look for the cheapest vulnerability.

Incidentally there is one online group who could eat Anonymous for breakfast - Mumsnet. If Anonymous ever took them on, they'd be grounded before you knew it.


  > Incidentally there is one online group who could eat Anonymous
  > for breakfast - Mumsnet. If Anonymous ever took them on, they'd
  > be grounded before you knew it.
For those not in the know: this seems to be a joke, as Mumsnet is a UK online parenting community consisting mostly of mothers, and presumably they would 'ground' Anonymous, who are supposedly just a bunch of punk kids.


Don't underestimate Mumsnet, the British government is terrified of them. Get them all pointed the same way and they are like a pack of angry she-wolves going for the wounded wildebeest of public policy. Think what they could do to any organization that doesn't have any real-world assets to protect it...


Amazing write-up - this is one of the best pieces of technical journalism I think I've seen. There is no hype, it's informed, and it's detailed - but not super technical, i.e. no math showing password complexity to rainbow table size tradeoffs, etc.

Any journalists out there, this is how it's done.


Ars Technica regularly comes out with good pieces like this. Out of all the tech news sites I've seen, they're generally the most professional (other than their photoshopped story images; those usually go for humor) and thorough in their coverage.


Shouldn't this be the standard, not the exception?


Noted!


Very well written article - it does a terrific job of explaining things like rainbow tables for a non-technical (or at least, technically-but-not-security-minded) audience. The only part that seems off is the theme that /all/ of the exploited vulnerabilities were necessary to render HBGary vulnerable:

"Even with the flawed usage of MD5, HBGary could have been safe..."

They homebrewed their own password system. Can someone switch on the tptacek bat-signal?
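For contrast with the flawed MD5 usage the article describes, a salted, iterated scheme of the kind a homebrew system tends to skip can be sketched with Python's standard library (the parameters are illustrative, and "kibafo33" is hedged as one of the weak passwords reportedly recovered in the breach):

```python
import hashlib
import os

def hash_password(password, salt=None):
    """Salted, iterated hashing: a per-user random salt defeats
    precomputed rainbow tables, and the iteration count slows
    brute force by a large constant factor."""
    if salt is None:
        salt = os.urandom(16)
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)
    return salt, digest

def verify(password, salt, digest):
    candidate = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)
    return candidate == digest

salt, stored = hash_password("kibafo33")
print(verify("kibafo33", salt, stored))  # True
print(verify("wrong", salt, stored))     # False
```

Even a weak password hashed this way resists rainbow tables, though nothing rescues a weak password from a targeted brute-force run.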


They homebrewed their own password system.

the story says hbgary hired an outside company to make this cms for them, which may explain the crappy security on that particular system.

Can someone switch on the tptacek bat-signal?

thomas' security company also got hacked a couple years ago and had sensitive information plastered all over a mailing list. rumor was that it happened via their use of wordpress for their weblog.

i guess the moral of the story is... you will get hacked by crappy third-party software?


> the story says hbgary hired an outside company to make this cms for them, which may explain the crappy security on that particular system.

Doesn't that make them look even more amateurish and incompetent? They chose an insecure content management system and, most importantly, they didn't isolate it enough. So penetrating that resulted in a complete penetration of their site.

If they were selling hand-made baskets, nobody would blame them, but they sell "security" and charge big bucks for it, so they deserve the ridicule.

It is an interesting perspective, I guess, on selling "security", both as a service and a product. One can charge lots of money, but unless there is a serious attack and penetration, it is hard to know what the quality of the security product is. Of course, once the penetration happens, there is at best pity and at worst ridicule and blame.


Not really, they should've tested the site themselves, and there's evidence that this actually happened for the main site but not the federal one. Normally a company contracting an external company to do the work either asks the external company to independently check the security of the output or organises the check itself. In the case of the federal site, it may be that neither happened.

> If they were selling hand-made baskets, nobody would blame them, but they sell "security" and charge big bucks for it, so they deserve the ridicule.

I disagree. Anonymous were a highly motivated, persistent attacker. It doesn't matter whether or not there was SQL injection involved; they'd just have kept going until they got in regardless. If there wasn't a SQL injection bug, there'd be something else. Tptacek's company has been hacked into, and our website got hacked into years ago (through shared hosting - someone else had a SQL injection bug on the same box and the hackers defaced every site on it. The difference is that we did a risk analysis beforehand and decided never to store sensitive data there nor use the same credentials for that account anywhere else). Given a long enough timeline, everyone gets hacked. While the SQL injection bug was the way in, the real schoolboy error was Aaron Barr using a weak shared password for Google Apps admin.


> the real schoolboy error was Aaron Barr using a weak shared password for Google Apps admin.

I've been reading through some of this HBGary stuff, and I have come to the conclusion that Aaron Barr is kind of a dipshit.

Read the email analysis at http://www.wired.com/threatlevel/2011/02/spy/ - it's filled with Aaron Barr "hacking" into people's Facebook accounts and then posting pictures of their kids as if he made some awesome discovery.


someone else had a SQL injection bug on the same box and the hackers defaced every site on the box. The difference is that we did a risk analysis beforehand and decided to never to store sensitive data there nor use the same credentials for that account anywhere else

And that's precisely the difference everyone should look for when hiring a security company.


Doesn't that make them look even more amateurish and incompetent? They chose an insecure content management system and, most importantly, they didn't isolate it enough.

No more than Google choosing a Linux kernel with a privilege escalation bug for Android, anyone using OS X in 2009 while a remote JDK bug sat open for 6 months, or anyone using Windows+IE in Dec '10 or Jan '11.

Unless you can explain how to only buy software that will never have any vulnerabilities.


I understand the saying "the cobbler's children go barefoot": if a security consulting company spent the man-hours to make sure their own systems were perfectly secure, they'd never have any hours left to bill to their clients. Still, when making a trade-off between practicality and security, a security company should keep in mind the possible PR consequences.

This wasn't quite like Google choosing a Linux kernel with a priv escalation bug or Apple leaving the JDK unpatched for 6 months. This was more like Google missing a great acquisition opportunity because they couldn't find the relevant documents on their internal fileserver, or Apple's website only rendering correctly in IE 5 because that's what they were using to test it.


Umm, if you're supposed to be a security guy, you shouldn't use IE+Windows, especially if there are publicly known vulnerabilities. You should also reconsider the use of OS X, and at least be able to follow instructions on how to disable the JDK. Etc.


Just means they've never had experience being attacked before. Always offense, never defense. In their minds, they never considered someone would have a reason to go after THEM.

They specialize in thinking up new ways to attack OTHERS, using OTHER people's tools. It's a huge problem in DC: a bunch of people telling other people what to do, with little idea or experience of how to do it themselves.


To me this reads a clear message:

Security through obscurity doesn't work.

As soon as someone who knows what they're doing comes along, you're in trouble.


I would hope that this message was already crystal clear.

Security through obscurity isn't a replacement for other strategies. That doesn't mean that it's useless; just that if people are relying solely on it, then you can pretty much bet that they're screwed.


Sorry for being unclear. I wasn't saying he - or anyone - is perfect. What I meant by that last part was that the moral of the story, to me, seems to be summed up pretty well in the two blog posts tptacek wrote and links to at the bottom of his profile page. (And maybe also Schneier's oft-repeated comments about how a system is only as strong as its weakest link.)


Good TLDR: "So what do we have in total? A Web application with SQL injection flaws and insecure passwords. Passwords that were badly chosen. Passwords that were reused. Servers that allowed password-based authentication. Systems that weren't patched. And an astonishing willingness to hand out credentials over e-mail, even when the person asking for them should have realized something was up."


HBGary isn't anywhere near the only company to have security holes like this open. It's just worse because they're a security company and they happened to piss off Anonymous.

Getting employees or users not to reuse passwords is probably the hardest thing to do.

Also, Ars' coverage of this story has been great.


Mammoth global corporations use passwords like 'P@55w0rd' on production systems, and leave servers that store product code and build systems open, not blocked by any firewall. This type of 'best practice failure' occurs everywhere.

For most, it's like flossing every day. You know you should... but do you?


I don't floss every day, but I certainly wouldn't go to a dentist with obvious gingivitis.


One point in favor of requiring ssh keys for external access is that the users don't get to blow it on passwords. Though it does require sysadmin staff who are willing to walk users through the process of creating the keys --- and stubborn enough to explain that this is the procedure until following it becomes the path of least resistance.


It's weird. Once things are set up, it's so much nicer than the alternatives.


Changing passwords is a lot easier than changing keys.


Also, passwords are compromised much more often than keys. To get someone's SSH key you have to have access to their local workstation, which is probably going to be more troublesome than access to a colo'd server, if for no other reason than most workstations go to sleep after they've been inactive for a while (there are other reasons, though).

Also, changing keys isn't that hard. You just re-run ssh-keygen and delete the old key from authorized_keys and replace it with the new one.


Replacing the private key is the hard part. It's the kind of thing where you don't discover that the new private key for your server isn't on your backup laptop until you need to log in and don't have access to a system with the key.


That's not a technical problem, though; it's social/organizational. If you make passwords too complex, change them too often, and enforce it in software, you simply encourage people to write them down, save them in the browser, etc. Or people will be phoning the helpdesk every day to get a reset, and security as a whole will be discredited as a waste of time. NOTE: I'm not saying that it is a waste of time, but the best policies in the world are of no help if people simply refuse to follow them.

The truth is passwords are not like toothbrushes - they don't actually need replacing every three months. Only if some event has occurred (e.g. a sysadmin leaving the company). They don't get weaker over time. Why not let someone keep the same decently strong password for as long as they're an employee? I guarantee this will be actually more secure.


> They don't get weaker over time.

Passwords do get weaker all the time, to the extent that they are used in multiple places. Changing the password on different systems on different schedules discourages password reuse. It also means the 'active' password is much less likely to be the password the employee used on a random news site they logged into once to comment.

There is obviously a balance to be had, because frequent rotations may encourage people to choose weaker passwords, but there is certainly value in expiring passwords.


Changing the password on different systems on different schedules discourages password reuse.

But it doesn't, it really doesn't. It just results in people buttonholing sysadmins in the corridor asking "when are you going to stop dicking around and implement SSO?". Not long after that, people just start ignoring security advice altogether.


You are right about reuse across multiple internal systems not being strongly discouraged.

Where it does discourage reuse is across external systems and websites. Making me change my corporate password every 90 days is an effective way to ensure that I don't use my current password across a large number of websites. Maybe I'd go to the effort of making my gmail password the same as my corporate password. The password on that random news site account I forgot about? No way.

It's not a panacea, but password expiry does effectively limit the spread of passwords in many cases.


Your last paragraph actually has a very good point - why do companies insist on changing the employee passwords every month? Chances are, if somebody got hold of your password, he/she is not going to wait a month before using that knowledge, right? So, really, I'm curious as to why most companies have this policy.


Passwords historically got weaker over time, but not in the sense that 'if he got your password he's not going to wait to use it'. They weaken in their hashed form. It used to be (still is, really) trivial to score the passwd file of a system and get all the hashes of passwords, but no plaintext.

By changing passwords every N months, you eliminated the ability of someone to crack the hashes and obtain a cleartext password where they previously only had a hash.

That time window has gotten absurdly short, however...and with Pass the Hash and MITM, I don't even need your password anymore :(
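The point about cracking stolen hashes can be made concrete. With unsalted MD5, one pass over a wordlist cracks every matching account at once (the stolen hash set and wordlist below are hypothetical, chosen only to illustrate the attack):

```python
import hashlib

# Hypothetical stolen set of unsalted MD5 password hashes.
stolen = {hashlib.md5(p.encode()).hexdigest() for p in ("kibafo33", "letmein")}

# One pass over a wordlist checks every stolen hash simultaneously;
# with per-user salts, each hash would need its own pass instead.
wordlist = ["password", "letmein", "kibafo33", "123456"]
cracked = [w for w in wordlist if hashlib.md5(w.encode()).hexdigest() in stolen]
print(cracked)  # ['letmein', 'kibafo33']
```

This is also why password expiry once mattered: rotating the password invalidated whatever plaintext the attacker eventually recovered from an old hash.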


Because, let's be honest, most "security consultants" are people who couldn't make it as sysadmins, and they've no idea what they're doing.


> It's just worse because they're a security company and they happened to piss off Anonymous.

It's just worse because HBGary isn't eating their own dog food. Think about that.


Company? Hell, government, military, it goes on and on how many vulnerable networks are out there.


Not in Britain, though, where there is no culture of carelessness: http://www.google.fi/search?q=british+lose+confidential+data


<rant>As others have said, the most serious thing is that it is a security company, and they seem to have ignored EVERY security best practice ever! Did they do anything right?

As a security specialist myself, I always do my very best to follow best practices, not only to protect myself, but to show others that I am willing to follow my own advice. I have met security people doing presentations who need to elevate to admin, and they are logged in as admin already, sometimes even with UAC turned off. This completely shatters my confidence in them. I am always logged in as a normal user and have long (20+ character) passwords, and I make sure people see that I have long passwords. I don't hold others to the same standard, but I know people: if they see that I use 20+ characters, they will not think that the 10 I want them to use is that bad.

This is the way the security landscape should work. We should all hold ourselves to much higher standards than the advice we give others, and we should always follow up on it. We KNOW laziness will cause breaches, so we should never be lazy when it comes to security. For a security company - and especially the president - to have such low security lowers confidence in the whole industry.

Yes, security is asymmetric. That is why companies must always follow at least the recommended best practices. If they are followed, the target might be too hard to break into, and a hacker might go someplace else where it's easier to break in. Targeted attackers might still get in, but we should all make sure they have to work DAMN hard to succeed! If we start thinking that the attackers will succeed anyway, we might as well drop all defences and display the admin passwords at the bottom of the "About us" pages.

Bottom line, HBGary fucked up good. They showed the world that they give advice they don't follow. They deserve the burn, and anybody thinking about hiring them should think again. Even if they fix the problems that allowed this breach, the basic problem is that they obviously don't understand security. If they did, none of this would have happened.

</rant>


Wow. Did they do anything right?

I can understand a typical organization making most of these mistakes, but a security firm?


I don't mean to shatter your dream of how security firms are run, but on the whole, I'd bet we're no better than the industry at large.

This might be a "cobbler's kids shoes" issue, or just a general failure of people and process.

One of the only truisms I've found so far when dealing with breaches is that almost no one gets this right proactively. You almost have to be the victim of a breach (the more public the better) to actually rethink how your people/processes are implemented.

This seems true for the largest banks in the world, and the smallest security firms.

And it's easy to look back in hindsight and say "how could they have possibly had things set up that way?", but the truth is that this was not an opportunistic attack; if these vulnerabilities weren't present, then they'd have looked for others.

That's the problem with securing your environment: you have to get everything right, and the attacker only has to get one thing right. That being said, as with most "disasters", this was a series of cascading failures (like most airplane crashes, or oil rig explosions).

Hopefully they'll learn from this (assuming the negative fallout doesn't completely bankrupt the company).


I don't mean to shatter your dream of how security firms are run, but on the whole, I'd bet we're no better than the industry at large.

If it's ever possible for me to hire a security firm that has higher standards than this, I'm going to do that!


I agree with stcredzero. You need some standards if you're going to put yourself out there as a security company. I'm a random chick with some web programming experience, and even I know you should salt your hashes and use an iterated hash function. I also know you shouldn't reuse passwords, and what SQL injection attacks are. Hey, maybe I should start a security company!
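For what it's worth, the salt-and-iterate scheme the parent is talking about is only a few lines of stdlib Python. A minimal sketch using PBKDF2 (one common choice; bcrypt and scrypt are alternatives, and the 200,000-iteration count here is just an illustrative figure):

```python
import hashlib
import hmac
import os

def hash_password(password, salt=None, iterations=200_000):
    """Return (salt, iterations, derived_key) for storage."""
    if salt is None:
        salt = os.urandom(16)  # a unique per-user salt defeats rainbow tables
    dk = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, iterations)
    return salt, iterations, dk

def check_password(password, salt, iterations, expected):
    dk = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, iterations)
    return hmac.compare_digest(dk, expected)  # constant-time comparison
```

The iteration count is the point: each guess costs the attacker 200,000 hash operations instead of one, and the per-user salt forces them to start over for every account.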


It's more of an issue with, "do they practice what they preach?" "Do they eat their own cooking?"

When people at a company don't do this, it's often a symptom. A friend of my girlfriend worked at an AT&T store; she could have gotten a huge discount on AT&T mobile service. Her answer: no thanks.


I'm not pointing any fingers, but the security industry both on- and offline has long had problems with snake oil.


This is a very well written summary of how it all went down. From this, we can conclude:

* An initial entry point through SQL injection (from which we can infer that the federal site was probably never security tested)

* The use and re-use of weak administrative passwords

* The adoption of poor practices (such as sending passwords in cleartext via email) at rootkit.com

Of all of these, the attack would probably have been limited to the federal site had Aaron used decent-strength administrative passwords (or just not reused them). The issue with the website comes down to risk ownership. If Aaron Barr was the risk owner for the website, it falls to him. If, in contracting the company, Aaron had a plan to test the website, then whoever tested it missed the SQL injection (which could have been introduced after a test by the CMS firm without his knowing, or any number of things). If Aaron didn't have a plan to get the site checked out, then both strikes fall on his shoulders. The third strike is quite clearly stirring the hornets' nest.

Moral of the story: Don't antagonise groups that will strike back without a plan to deal with it.

Moral #2: Don't reuse passwords.

Moral #3: Retest your app on every significant change (including initial deployment).
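As a footnote to Moral #3: the bug class that opened the door is also among the easiest to close. A toy sketch of the difference between string-built and parameterized queries (sqlite3 standing in for whatever the CMS actually used; table and payload are made up for illustration):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, pw_hash TEXT)")
conn.execute("INSERT INTO users VALUES ('aaron', 'deadbeef')")

def find_user_unsafe(name):
    # vulnerable: attacker-controlled input is pasted straight into the SQL
    return conn.execute(
        f"SELECT name FROM users WHERE name = '{name}'").fetchall()

def find_user_safe(name):
    # parameterized: the driver treats the input strictly as data
    return conn.execute(
        "SELECT name FROM users WHERE name = ?", (name,)).fetchall()

payload = "' OR '1'='1"
print(find_user_unsafe(payload))  # dumps every row in the table
print(find_user_safe(payload))    # no such user: []
```

The classic `' OR '1'='1` payload turns the first query into "match everything"; against the parameterized version it's just a weird username that matches nothing.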


SQL injection and MD5... on a "security" company? In 2011?

I'm sorry, but that's just egregious. That's like being a bodyguard and not even putting a lock on your own house.

The rest of the attacks could have happened to anyone. We all know it's best practice to use many different passwords, but most people don't because it's more convenient to have only one or a few. And if an email comes from the address it should come from, you could have a brain fart and give up the info without thinking.

But the first two parts of the attack should NOT have been possible for anyone even pretending to call themselves a computer "security" firm in 2011.
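To make the MD5 point concrete: unsalted, single-round hashes of weak passwords fall to a simple precomputed lookup. A toy stand-in for a rainbow table (real attacks use wordlists of millions of entries, or online services that have already done the precomputation; the passwords here are illustrative):

```python
import hashlib

# Toy precomputed table: MD5 digests of a small common-password wordlist.
wordlist = ["password", "123456", "letmein", "qwerty"]
table = {hashlib.md5(w.encode()).hexdigest(): w for w in wordlist}

# An attacker holding a dumped hash just looks it up.
stolen_hash = hashlib.md5(b"letmein").hexdigest()
print(table.get(stolen_hash))  # the password pops right out: letmein
```

A per-user salt would force the attacker to rebuild the table for every account, and an iterated hash would make each table entry thousands of times more expensive to compute.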


You are completely wrong to justify people using the same passwords in multiple places because it is convenient.


Understanding a behaviour isn't the same as justifying it.


I just took an introductory course in web security. It explained many of the terms that fly over my head on HN.


This really makes the case for much more public-key cryptography everywhere -- if all of HBGary's emails, even just the internal ones, had been encrypted, HBGary would have gotten away with just a small DDoS and would be meandering along just fine today. I think people who run a computer security company should at least be able to figure out Enigmail.


> This really makes the case for much more public-key cryptography everywhere

This is what we do. We use Google Apps, so we rely on a combination of existing policy, crypto, and user awareness. It's not the use of email that's the issue, it's how the data is stored. If it's encrypted with good crypto, it's not a problem. If it's encrypted with bad crypto, or no crypto, then the extent of the problem depends on the data.

As an aside, while you would want to encrypt anything sensitive, that doesn't mean you need to encrypt everything - it certainly makes conversations over smartphones more difficult, and google chats wouldn't be encrypted.

Still, a little common sense goes a long way.


Not sure how e-mail encryption would have helped...? They SQL-injected the site and got the DB, obtained the passwords, and then proceeded from there (social engineering: the firewall policy change, the ssh password given out over e-mail, etc.).


If you protect your keys well enough, you can protect the content of your emails. If they're stored on an IMAP server, downloading them will do you no good without the keys. Additionally, compromising a single machine may only yield the key to some subset of a company's emails.


It would have helped because the private keys needed to decrypt the emails would not have been kept on the server, and even if they were, you'd still need a passphrase to use the private key (though it could have been the same insecure passphrase used elsewhere).

Also, a common policy of encrypting and signing emails would have stopped the social engineering attack completely, as the sysadmin would've known not to accept an unsigned request to give out passwords.

Kind of mind boggling that people don't do this generally already.
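The "reject anything unsigned" policy the parent describes can be sketched in a few lines. This toy uses a shared-secret HMAC rather than PGP's public-key signatures (the principle of verify-before-acting is the same; the key and request text are made up for illustration):

```python
import hashlib
import hmac

SHARED_KEY = b"example-only-key"  # stand-in; PGP would use per-person key pairs

def sign(message):
    """Compute an authentication tag over the request."""
    return hmac.new(SHARED_KEY, message, hashlib.sha256).hexdigest()

def verify(message, signature):
    """Accept the request only if the tag checks out (constant-time compare)."""
    return hmac.compare_digest(sign(message), signature)

request = b"please open firewall port 59022 and reset my password"
tag = sign(request)
print(verify(request, tag))        # genuine signed request: True
print(verify(request, "f" * 64))   # forged/unsigned request: False
```

Under that policy, the "hi, it's Greg, just give me the password" email fails verification on arrival, and the sysadmin never has to rely on gut feeling.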


One: the root password to the machine running Greg's rootkit.com site was either "88j4bb3rw0cky88" or "88Scr3am3r88".

There must be more to it than this. If you know it's one of two passwords, why bother asking - couldn't you just try both? (In retrospect, maybe it was to give Jussi confidence that he was communicating with the real Greg? [Who else, after all, would know the root passwords?])


That was the root password, and remote root login was disabled, so they had to social-engineer an account that could get in over ssh and then su to root to do the real damage.
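For reference, that remote-root restriction is a one-line sshd setting, which is exactly why the attackers needed an ordinary user account first. A sketch (file location and surrounding options vary by distro):

```
# /etc/ssh/sshd_config (excerpt)
PermitRootLogin no
```

With this set, even a known root password is useless over the network; an attacker must compromise a normal account and escalate locally.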


Root passwords shouldn't get you far, though.

I once published all my root passwords on IRC as a challenge and didn't change them for a few weeks.

Nothing happened.


  HBGary used Google Apps for its e-mail services
So this high-flying, super-duper, high-tech hardcore company actually uses an external email provider for their (I suppose) super-secret emails?

The more I learn about this incident, the more Mr. Barr and HBGary look like a bunch of amateurish dolts who may give good PowerPoint presentations, but I for one sure wouldn't take my security business there.


We use Google Apps for our e-mail and we're a security company not entirely dissimilar to HBGary (the main company, not federal, and in terms of services, not practices).

We moved to Google Apps when they bought Postini. We had an adult discussion of the benefits and drawbacks, and since everyone had PGP, it was a straightforward job of encrypting things according to policy. Google Apps (for Business, at least) offers SSL on everything and adds relatively little additional risk compared to using someone like MessageLabs for your AV. Obviously they're storing the data for you, but you'd get that with any hosted provider.


FYI, it looks like Anonymous is aiming for total humiliation as far as HBG is concerned - they have even set up a pretty web interface/search for the entire collection of stolen emails - see http://hbgary.anonleaks.ru/




