
This is known as insider threat detection / User Behavior Analytics (UBA). It could also be considered part of Data Loss Prevention (DLP). Insider threats are probably the hardest type of information threat to reliably detect on a large corporate network, especially at a company where most of the employees are very active users of technology. The field is still in its infancy, with lots of cool-looking "AI-driven / ML-powered / buzzword-optimized" products from startups that typically end up generating an absurd number of anomaly detections per day, usually with a 99.9% false positive rate. Of course I'm generalizing, and I imagine some companies have implemented a fairly effective UBA program, but I think they're rare.
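
To put rough numbers on that false positive rate, here's a back-of-envelope sketch in Python (every figure is invented for illustration):

    # Toy arithmetic: why event-level anomaly detection drowns analysts
    # at enterprise scale. All numbers are made up.
    employees = 50_000
    events_per_user_per_day = 5_000   # logins, file reads, DNS, email, ...
    flag_rate = 0.0001                # detector flags just 0.01% of events
    real_incidents_per_day = 1        # genuine insider actions are rare

    daily_events = employees * events_per_user_per_day
    daily_alerts = daily_events * flag_rate
    fp_rate = 1 - real_incidents_per_day / daily_alerts
    print(f"{daily_events:,} events -> {daily_alerts:,.0f} alerts/day, "
          f"~{fp_rate:.3%} false positives")
    # 250,000,000 events -> 25,000 alerts/day, ~99.996% false positives

Even a detector that flags a vanishingly small fraction of events buries the review team.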

It's just trying to find a needle in an extremely large haystack. When you're dealing with technology departments, normal behavior can easily be a modest amount of network traffic for a few days followed by a huge burst of downloads and uploads to and from internal services, databases, cloud storage, and any number of other things. Suspicious website browsing could be innocuous research or curiosity. That personal USB drive plugged in is probably some developer with a deadline who never got around to requesting a corporate drive, can't wait a few days for one to be approved, and needs to physically transfer files ASAP.

It's just not an easy problem. There are probably hundreds of other instances of an Apple employee not looking at any prototype data for months and then suddenly poring over tons of it. Maybe they're preparing for a presentation or a new project. Adding lots of red tape and restrictions and wasting time investigating employees who've done nothing wrong (or who violated policy with no real bad intent or serious negligence), and telling people they can't do things that make their jobs more efficient, takes a huge toll on everyone. It's a necessary evil, but the trade-offs always have to be considered. Apple wants their autonomous car program developed as quickly as possible, and the more they restrict access and require lengthy approval processes, the slower things will get done.

And fundamentally, unless you're in a weird situation, probably ~0.1% of your employees are insider threats, and probably ~0.01% are significant insider threats which could actually affect your business. The odds are stacked against you.
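
A little Bayes makes the point concrete (rates invented for illustration):

    # Back-of-envelope base-rate math: even a very good per-employee
    # classifier has terrible precision when threats are this rare.
    prevalence = 0.001     # ~0.1% of employees are insider threats
    sensitivity = 0.95     # detector catches 95% of real threats
    fp_rate = 0.01         # and wrongly flags 1% of innocent employees

    p_flagged = prevalence * sensitivity + (1 - prevalence) * fp_rate
    precision = prevalence * sensitivity / p_flagged
    print(f"P(actual threat | flagged) = {precision:.1%}")
    # ~8.7%: more than 9 out of 10 flagged employees did nothing wrong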

Occasionally you'll run across a smoking gun that's easy to detect with basic logic like "email sent to webmail account with no subject and over 6 attachments", but if you're dealing with a smart insider threat - especially one working on behalf of a superpower government's intelligence apparatus - you're not going to find something so blatant. I have sometimes run across things like that, but it's usually something gray like a developer emailing themselves some code so they can continue to work on it at home. The worst thing I've ever found was a salesperson emailing themselves proprietary leads/contact lists shortly before their resignation date. A spy is never going to get caught from such low-hanging fruit detections.
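
That kind of rule is trivial to write, which is exactly the problem. A minimal sketch (field names are made up) of the blunt logic involved:

    # Blunt rule-based detection: catches careless exfiltration,
    # never a professional. Message fields are hypothetical.
    WEBMAIL_DOMAINS = {"gmail.com", "outlook.com", "yahoo.com"}

    def is_suspicious(msg: dict) -> bool:
        """Flag mail to personal webmail with no subject and many attachments."""
        domain = msg["to"].rsplit("@", 1)[-1].lower()
        return (domain in WEBMAIL_DOMAINS
                and not msg["subject"].strip()
                and len(msg["attachments"]) > 6)

    print(is_suspicious({"to": "me@gmail.com", "subject": " ",
                         "attachments": ["file"] * 7}))   # True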

You have to start with the basics: strict policy guidelines, least privilege principle, log everything, a good team of people to investigate anomalies and write up employees who are violating policy, and then finally you can shell out a lot of resources on automated detection and baseline and tune for a long time until you have a manageable number of dashboards and reports and alerts that the team can respond to. Apple will presumably restrict access more carefully after this incident, and implement some new statistical anomaly detection, but insider threats will always be hard to detect.
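
The "baseline and tune" step can start as simply as per-user statistics, something like this toy sketch (data and threshold invented):

    # Per-user volume baseline: alert when today's downloads are far
    # outside the user's own trailing history. Numbers are made up.
    from statistics import mean, stdev

    def anomaly_score(history_mb, today_mb):
        """Z-score of today's volume against the trailing baseline."""
        mu, sigma = mean(history_mb), stdev(history_mb)
        return (today_mb - mu) / sigma if sigma else float("inf")

    history = [120, 95, 140, 110, 130, 105, 125]   # MB/day, last 7 days
    today = 9_800                                   # sudden bulk download
    if anomaly_score(history, today) > 3.0:         # threshold: endless tuning
        print("alert: investigate (and hope it's a dev on deadline, not a spy)")

The hard part isn't the math; it's tuning thresholds like that 3.0 until legitimate bursts (the presentation-prep case above) stop flooding the queue.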

Dabbling in UBA also made me realize some of the issues faced by agencies like the NSA. I'm sure they have strong policies against unauthorized data access (like looking up information about romantic partners), fully intend to enforce them, and have lots of manual and automated detections, but in reality the amount of data and the number of daily data accesses are probably way too high to consistently catch bad actors. I think that's one of many strong practical arguments for not letting them have easy access to such a big trove of sensitive data, even if you assume they're behaving completely ethically and responsibly.



Yes, from my very limited exposure to this (not in SV, but in healthcare), these are two key points:

> absurd number of anomaly detections per day, usually with a 99.9% false positive rate

> Adding lots of red tape and restrictions and wasting time investigating employees who've done nothing wrong

What I've seen/heard is that you end up with some EVP pissed off that IT/SEC is bothering their people – rightly or wrongly, it'll inevitably get used as an excuse for why something is late. So the EVP (virtually) marches into the office of the IT/SEC director and issues an edict that everyone in <this super special department> is too important to be bothered, and any access restrictions or investigations affecting <the department> must get prior approval from the EVP's office. That's of course a huge pain in the ass, which results in that department effectively being exempt, i.e., a perfect place for an internal spy.

The IT/SEC director, often several rungs down from the angry EVP, technically has the authority to stand up to the EVP, but that's a risky move that can easily start a turf war.

So, for these programs to be effective, they must get buy-in from the absolute highest levels with no exemptions, which is not easy in the highly political world of huge organizations.


As someone who worked in a special unit in a big healthcare company, this hits really close to home. Our BU sponsor got us an outside internet connection in our building so we would have unfettered internet access. That would've been the perfect spot to offload documents because you're using a company computer on a non-monitored internet connection and our department had no oversight.

In hindsight, this is very scary given that I had access to production systems with loads of PHI, PII, etc. with no censoring or filtering in place.


And here I always thought it was EVPs who came up with those ridiculous security measures, not the IT/SEC guys, and that it was the lower-level managers who had to fight to actually get something done. At one of my previous jobs, it was our direct boss who fought tooth and nail to shield our programming teams from the consequences of the whole corporation deciding to level up some more in ISO standards...

Don't get me wrong. I understand the need for security measures in a company. But there must be some middle ground - some way of securing data and networks without incurring a 1000% penalty on productivity for all your programming teams.


Yeah, I've been in environments where they completely locked down internet access, and we had to "fight tooth and nail" to get an exemption for a handful of sites like StackOverflow. I agree it can be a huge productivity problem.

Again, my experience is very limited compared to many, but the best mix I've seen is where programmers had basically wide-open internet access BUT everything was still logged, and there must have been some type of automated review. A coworker was planning her wedding and, while sitting on conference calls, browsed around a bunch of wedding sites. She got an email from IT asking about it. (It wasn't a big deal, just embarrassing.) Also, certain categories of data could not be copied to a local computer; they had to be manipulated on a server. Technically you could transfer data from the server (again, logged), but it was a firing offense if you were found with sensitive data on your laptop.
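
The automated review they had could be as simple as counting categorized proxy-log entries per user, roughly like this sketch (the categories and threshold are my guesses, not their actual setup):

    # "Log everything, review automatically": flag users with heavy
    # off-topic browsing. Categories and threshold are invented.
    from collections import Counter

    OFF_TOPIC = {"weddings", "gambling", "job-hunting"}

    def flag_users(proxy_log, threshold=50):
        """proxy_log: iterable of (user, site_category) pairs for one day."""
        hits = Counter(user for user, category in proxy_log
                       if category in OFF_TOPIC)
        return [user for user, n in hits.items() if n > threshold]

    day_log = [("alice", "weddings")] * 60 + [("bob", "dev-tools")] * 200
    print(flag_users(day_log))   # ['alice'] -> polite email from IT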


> The field is still in its infancy

Translation: it doesn't work.


Basically yes, but everything has to start somewhere. I imagine it'll always be a hard problem, but it'll improve over time.


> It's just not an easy problem. [...]

I guess that's the thing: those of us who are ignorant about such things have the benefit of hindsight now that we know he did it, and detection beforehand is just not that easy.



