Summary: The author of the article has a web app that allows you to administer programming tests to job applicants. The two languages currently supported are C and C++. As part of that, (I'm guessing) the submitted code goes through some sort of compilation step.
The author had left a default whitelist on the production box that allowed /dev/random to be read. The attacker compiled a program (while taking a test, I'm guessing) that did a #include of /dev/random. The author found out, based on IP address and sheer luck, that the attacker was a user of the system he had corresponded with previously.
The reason I summarized this is that I wasn't totally clear from the article on what codeboff.in was. It makes sense how an attack like this could happen. The author seems to really know his stuff, and the attack vector really was a small oversight. It's pretty unsettling to me, though, how persistent/insane this attacker seems.
From the perspective of an information security researcher with plenty of time spent working on both sides of the issue, I can tell you that any attacker worth their salt will be persistent and use multiple attack vectors to get what they want. This holds true for bored teens, corporate spies and real penetration testers (not just dorks that get paid to run scanning tools on your network) alike.
I've spent the last decade doing mostly IT security defense work. I don't worry when I see someone port-scan my servers. Those kiddos are almost always benign. I worry when I see that our IDS starts to correlate many subtle attacks over a long period of time from similar places (such as the same school, ISP, etc.).
Pretty good system for correlating different events to get a better picture of a potential intrusion. If you're familiar with Snort, I recommend checking it out.
I already have a commercial system (with some homebrew, of course) that correlates, but I absolutely love playing with new things. I'll get this going in a lab soon. Thanks for sharing!
That's interesting. I've never admin'ed a server before so I hadn't thought about this at all.
It'll probably be useful to me to keep more verbose access logs and hash IP addresses to track what individual users (at least those not behind a group firewall) are up to.
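The IP-hashing idea above can be sketched quickly. This is a minimal illustration, not anything from the article: it uses a keyed hash (HMAC) rather than a bare hash so that someone with the logs can't just hash every possible IPv4 address and reverse the pseudonyms; the salt value and function names are made up for the example.

```python
import hashlib
import hmac

# Secret key kept out of the logs (assumption: stored securely, and
# rotating it periodically limits how long a pseudonym stays linkable).
SALT = b"replace-with-a-random-secret"

def pseudonymize_ip(ip: str) -> str:
    """Return a stable pseudonym for an IP so repeat visits correlate
    in the access logs without the raw address ever being written."""
    return hmac.new(SALT, ip.encode(), hashlib.sha256).hexdigest()[:16]

# The same IP always maps to the same token; different IPs diverge.
a = pseudonymize_ip("203.0.113.7")
b = pseudonymize_ip("203.0.113.7")
c = pseudonymize_ip("203.0.113.8")
```

A plain unsalted hash would also "work" but is trivially reversible for the small IPv4 space, which defeats the point of hashing in the first place.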
I don't generally talk in a public forum about the specific products I use, but if you want to play with an IDS at home, Snort or Suricata would be good toys to play with.
In the enterprise, there are so many IDS/IPS and SIEM products out there, and they're all different. Some are great for mixed environments of Windows, UNIX and infrastructure. Others are better for specific platforms. Your best bet is to get demo licenses and start playing with them instead of relying on word of mouth or shiny pamphlets from marketing types.
Keep in mind I've played both sides of this game. People who know how attacks work in the wild can usually spot things that might not raise the ire of average sysadmins.
No, I'm saying an attacker would also have to find and destroy the immutable central logging store that all the stuff goes to. They'd also have to reach through the sands of time to get the stuff that's already been blowing up my phone. Get over yourself.
It's worth mentioning that no one is perfectly secure, either. A sufficiently bored, persistent and motivated attacker WILL find some kind of way in. I'll give you that. I've seen it play out enough times. But with centralized logging, transaction shipping for hot-site replication, and sufficient effort placed on separation of duties, an organization can minimize the impact a successful attacker can have, and can ensure that all audit trails remain intact in some way or another.
I am over myself. Note that everything I replied with was a question, and that I haven't had to preface anything I've said with something like 'Keep in mind I've played both sides of this game'.
Anyway, to go on (keep in mind that I am genuinely interested in what you have to say): isn't it true, though, that your IDS is only as good as its signature db? So if there was an sshd or ftpd 0day that wasn't generic (generic shellcode, etc.), then in theory you could spawn a shell without it ever being logged?
It's true. You could. But that shell and the activities therein would get logged in places that would make it hard for most attackers to cover their tracks. Mandatory Access Control and privilege separated daemons should also severely limit what that shell can do.
I'm not trying to have a pissing contest, but the reason we have defense in depth isn't to get 100% secure. You and I both know it's impossible. It's also there to preserve evidence if the unthinkable occurs. I do genuinely appreciate the forward thinking, so I'm sorry if I was a tad harsh in my reply above. Your responses (and your HN bio) came off a tad skiddie-ish at first blush. Being in infosec, where "Hacker" has a different meaning and stigma than "Hacker" in HN was originally meant to convey, I'm actually glad there are threads where Infosec "hacking" discussions can occur without getting completely buried by haters.
It's ok and thanks, I know I can also come across as terse/dickish sometimes. My profile is intentionally vague just because I prefer being anonymous (this is my 5th or 6th profile on here).
I worked in infosec (on all 3 sides, if you know what I mean) from the mid to late 90s, and then became disillusioned with the entire industry and left for greener pastures. I still attempt to keep on top of things (and have done the odd contract job here and there) but I am not all that up with everything going on.
Wrt the topic, what you are saying is that even with my /bin/sh running in the context of whoever sshd or ftpd runs as, by the time I figure out a local escalation, the activity on that shell has already been sent to another machine and on to your phone, etc.?
That was, of course, hyperbole (about blowing up my phone); however, on certain very sensitive systems I do have a kernel module that provides a wrapper around execve(), whose output goes directly to a remote logging server and gets replicated. Yes, it's fucking noisy, but storage is cheap and databases are searchable.
As you well know, a careful individual can evade it if they know it's there, but the initial prodding would get logged.
Yeah, it took reading the entire article to figure out what the site in question is for and what it does that allows regular users to reinstall the kernel on one of his machines.
The first several times I read the term "code evaluator", it never occurred to me that he really meant evaluator as in executing user-supplied code on his server. Holy yikes!
Reading the rest of the article, it does seem that he's at least thinking about how to deal with the ramifications of that decision. It certainly takes some boldness to try this at all. Personally I would have got as far as "executing user-supplied code", thought about it a second, shivered at the implications, and picked a different idea to run with.
That blog post is one in a series of ongoing commentary about the site, so I don't usually explain the idea again. I guess it really is confusing out of context.
The code evaluation runs on a standalone server. I hope to learn what to block over time from a series of successful exploits. Evaluations that fail to compile/run/return-an-expected-result are flagged on another server (on which they are not evaluated in any way) so I can study them later.
Yeah, like I say, hats off to you for even trying. Sounds like it will be a fun site to run, since I bet you'll be battling would-be hackers full time from here on out.
Thanks very much, in that case. Honestly I get no kicks out of dealing with the bad guys. I'm hoping the combination of [virtual machine quarantine] and [being low profile enough to stay out of the scope of most bad guys] will see me through.
If I could give one piece of advice to every startup who is worried about server security, it would be this: read the NSA's two page pamphlet for hardening Red Hat Enterprise Linux [1] for a quick overview and then move on to the ~200 page complete guide [2]. The principles are generally applicable to most Linux systems.
The NSA's guide is good enough for low value targets. Besides, these are your tax dollars at work, so you really should make use of them. :-p
EDIT: Just thought of something.... There were several privilege escalation exploits for the kernel made public around this time frame [3]. I'm guessing that's the reason why his kernel was silently upgraded. :-p
It's never too early to educate your employees about social engineering. My startup is small but has been targeted a few times by people trying to social engineer their way into more information.
An employee of one of our direct competitors even called one of my developers to try to extract information; good thing she caught on quickly. We eventually tracked a number of attack vectors to either him or his company. They weren't the brightest attacks, though: calling her from his work number, emailing her and other employees from email addresses he'd used online, and running port scanners, SQL injection tests, etc. on our servers from company IP addresses.
Other times people call pretending to be customers, but they don't pass the smell test. We've had a few people call with tons of questions about our tech, capabilities, existing customers, etc., but who won't divulge much of their own information or can't properly answer some of the basic questions that our real customers would know.
Early customers are kind of like a job interview: it has to feel right for both sides.
> Early customers are kind of like a job interview: it has to feel right for both sides.
So right. I consulted for a while as a programmer. It took a few times being burnt before I could distinguish between the good and the bad clients, but after that the difference was night and day.
The biggest thing I look for is whether the client is clued in to the feasibility and difficulty of what they're commissioning. If someone wants a Facebook clone for $500 and/or finished in 3 weeks, I'd walk away immediately. I guess that's an exaggerated example, but if a client is completely non-technical it's almost always too painful to work with them directly.
I don't agree with 'never say no to a customer' in software development work.
Huh? I don't see how this person is a "blackhat". Just because they are playing with your server and trying to find holes doesn't mean they are intending to maliciously exploit them.
I also don't see how it is social engineering. What did the attacker convince you of?
They didn't mention it in the emails at all. It was the contrast between 'my son is named x' and 'i kill u' that scared me. What I should have mentioned in the blog post was that there were other messages they left as comments in the code they submitted.
So at the moment you have to leave an email address to take an open test. (This will change soon to no address required.) The guy consistently signed in with a fake email address, to which email was bouncing. I blocked that address, which just means it's no longer accepted for signing in to an open test. This is just a soft warning, of course. So the guy appended a character to the address and left me the message: "Why have you blocked test@example.com? Too bad codeboffin!" I just don't see why they couldn't have raised it in the email thread where they were already in mid-conversation with me.
I presume the social engineering was coming. I would probably have told the person anything he wanted to know about my security setup by then. I don't think I would have been as open to anyone else off the bat.
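The soft-blocking described above could look something like the sketch below. This is purely illustrative, not the site's actual code: it canonicalizes an address (lowercasing, stripping a "+tag") before a blocklist lookup. Note that this only defeats trivial variants; an attacker who appends a character, as this one did, produces a genuinely different address, so bounce checking or rate limiting would still be needed on top.

```python
def canonical(addr: str) -> str:
    """Light canonicalization before a blocklist lookup: lowercase the
    address and drop any '+tag' in the local part. Catches only the
    laziest variants of a blocked address."""
    local, _, domain = addr.strip().lower().partition("@")
    local = local.split("+", 1)[0]  # "test+1@example.com" -> "test@..."
    return f"{local}@{domain}"

# Hypothetical blocklist seeded with the bouncing address.
BLOCKED = {canonical("test@example.com")}

def allowed(addr: str) -> bool:
    """Soft warning: refuse sign-in for open tests with a blocked address."""
    return canonical(addr) not in BLOCKED
```

Appending a character to the local part ("test1@example.com") sails straight past this, which is exactly the cat-and-mouse the comment describes.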
(cross-posted to the blog): If all the "attacker" did was a #include /dev/random, how is that not something someone on your own QA team would do? Heck, if this were my company, that would be one of the first things I'd do (albeit only on my laptop or my staging environment)...
There may be other details that he hasn't shared, but if this is the "attack," it's pretty benign.
The #include /dev/random hosed the code evaluation task queue, and everyone else's code evals started piling up behind it. This behaviour has been fixed now; every submission is tried just once and then the evaluator moves on. The attacker might not have known they could break my system this way, but they also threw in some system("rm -rf /") calls, which is fairly malicious.
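The try-once fix can be sketched as a worker that gives each submission a hard timeout and then moves on, so a pathological submission (like a compile that reads the endless byte stream of /dev/random) can't wedge the queue. This is an assumed reconstruction, not the author's actual evaluator; the function names and timeout values are illustrative.

```python
import subprocess

def evaluate_once(cmd, timeout_sec=10):
    """Run a single code-evaluation command with a hard timeout.
    A submission that hangs is killed and marked as failed, and the
    worker moves on to the next job instead of letting everyone
    else's evals pile up behind it."""
    try:
        result = subprocess.run(cmd, capture_output=True, timeout=timeout_sec)
        return ("ok" if result.returncode == 0 else "failed", result.stdout)
    except subprocess.TimeoutExpired:
        return ("timeout", b"")

# A command that hangs gets killed after the timeout...
status, _ = evaluate_once(["sleep", "60"], timeout_sec=1)
# ...while a well-behaved one completes normally.
status_ok, out = evaluate_once(["echo", "hi"])
```

In a real deployment the command would of course run inside the quarantined VM, with the timeout as just one of several resource limits (memory, processes, file access).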
No social engineering took place unless you gave up something you shouldn't have. Someone was friendly in email and was playing around with your code submission system seeing how far he could push it. rm -fr * is a nice joke on you for letting it slide and the #include /dev/random is just funny.
What surprised me is that this one evil guy took the effort to hack his server specifically. Some strange motive is in play. Usually they scan IP addresses and ports and attack servers randomly.
That's neat. What I'd worry about is boot-up time. I let people compile and test their program with an AJAX button in the middle of taking a test. It's already 3-4 seconds before they get the result back; more delay would hurt.
With tasks queuing up as they already are, I might not be able to keep a fresh VM pre-warmed every time, either.
edit: The other factor is, of course, the prospective cost of a brand-new configuration, unless I find I can't prevent the current config from being breached and ephemeral VMs would fix it.
Yes. Booting up every VM would take too long. Cloning them just before you execute the payload is the way to go.
(At least as long as you have a general purpose operating system. A specially paravirtualised guest probably wouldn't see much of a difference between booting and resuming. But that's only a theoretical musing for your circumstances.)
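The clone-ahead idea can be sketched abstractly as a pool of pre-warmed sandboxes: keep a few ready at all times, hand one out instantly when an evaluation arrives, and replace it in the background. Everything here is illustrative; a real implementation would clone or snapshot-resume actual VMs rather than build dicts.

```python
import queue
import threading

class WarmPool:
    """Keep N sandboxes pre-warmed so an evaluation never waits for a
    cold boot. A 'sandbox' here is just a dict standing in for a
    cloned/snapshot-resumed VM (all names are made up for the sketch)."""

    def __init__(self, size=3):
        self._ready = queue.Queue()
        self._counter = 0
        self._lock = threading.Lock()
        for _ in range(size):
            self._warm_one()

    def _warm_one(self):
        # Stand-in for the slow part: cloning/booting a fresh VM.
        with self._lock:
            self._counter += 1
            vm_id = self._counter
        self._ready.put({"id": vm_id})

    def checkout(self):
        vm = self._ready.get()  # instant if the pool is warm
        # Start warming a replacement in the background so the pool
        # refills while the evaluation runs.
        threading.Thread(target=self._warm_one).start()
        return vm

pool = WarmPool(size=2)
vm = pool.checkout()
```

This hides the boot latency from the user, at the cost the thread above worries about: under a burst of submissions the pool can drain faster than replacements warm up.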