
> but presumably the attacker here had to know about the crash, and the layout of the crash dump

If I were an advanced persistent threat attacker working for China who had compromised Microsoft's internal network via employee credentials (and I'm not), the first thing I'd do is figure out where they keep the crash logs and quietly exfil them, alongside the debugging symbols.

Often, these are not stored securely enough relative to their actual value. Having spent some time at a FAANG, I can say that every single new hire, except those who have worked in finance or corporate regulation, assumes you can just glue crash data onto the bugtracker (that's what bugtrackers are for, tracking bugs, which includes reproducing them, right?). You have to train them out of that, and you have to have a vault for things like crashdumps that is so easy to use that people don't get lazy and start circumventing your protections, because their job is to fix bugs and you've made their job harder.

With a compromised engineer's account, we can assume the attacker at least has access to the bugtracker and probably the ability to acquire or generate debug symbols for a binary. All that's left is to wait for one engineer to get sloppy and paste a crashdump as an attachment on a bug, then slurp it before someone notices and deletes it (assuming anyone does; even at my big scary "we really care about user privacy" corp, individual engineers were loath to make a bug harder to understand by stripping crashlogs off of it unless someone in security came in and whipped them. Proper internal opsec can really slow down development here).



Your statement:

> but presumably the attacker here had to know about the crash, and the layout of the crash dump

another statement from the article:

> Our credential scanning methods did not detect its presence (this issue has been corrected).

The article does not give any timeline of when things happened.

Imagine the following timeline:

- Hacker gets the coredump in 2021, not knowing that it contains valuable credentials.

- For data retention policy reasons, Microsoft deletes its copy of the coredump, but the hacker simply keeps theirs.

- Microsoft updates its credential scanning methods.

- Microsoft runs the updated credential scanner over its (retention-reduced) archive of coredumps. As that particular coredump no longer exists at Microsoft, they are not aware of the issue.

- Hacker gets the scanner update.

- Hacker runs the updated credential scanner over their own archive of coredumps. Jackpot.
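
For concreteness, here's a minimal sketch of what such a scanner pass over an archive could look like. This is not Microsoft's scanner; the patterns, the "coredump_archive" path, and the whole-file read are illustrative, and a real tool would stream multi-gigabyte dumps with far more detectors:

    #!/usr/bin/env python3
    # Toy credential scan over a local coredump archive.
    import re
    from pathlib import Path

    # A few common secret shapes; real scanners have hundreds,
    # plus entropy checks and live-key validation.
    PATTERNS = [
        re.compile(rb"-----BEGIN (?:RSA |EC )?PRIVATE KEY-----"),      # PEM keys
        re.compile(rb"eyJ[A-Za-z0-9_-]{10,}\.eyJ[A-Za-z0-9_-]{10,}"),  # JWTs
        re.compile(rb"AKIA[0-9A-Z]{16}"),                              # AWS key IDs
    ]

    def scan_dump(path: Path):
        data = path.read_bytes()  # fine for a toy; stream in real life
        for pat in PATTERNS:
            for m in pat.finditer(data):
                yield m.start(), m.group(0)[:40]

    if __name__ == "__main__":
        for dump in Path("coredump_archive").glob("*.dmp"):
            for offset, snippet in scan_dump(dump):
                print(f"{dump} @ 0x{offset:x}: {snippet!r}")

That's the point of the timeline: re-running exactly this kind of pass with new PATTERNS over an old archive is cheap, whether you're the vendor or the attacker.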


>... you have to have a vault for things like crashdumps that is so easy to use that people don't get lazy...

Let's assume a crash dump can range from megabytes to gigabytes in size.

How could a vault handle this securely?

The moment it is copied from the vault to the developer's computer, you introduce data remanence (undelete from the file system).

Keeping such a coredump purely in RAM makes it accessible on a compromised developer machine (e.g. via GDB), and if the developer machine itself crashes, its own coredump contains/wraps the sensitive coredump.

A vault that doesn't allow direct/full coredump download, but allows queries (think "SQL queries against a vault REST API") could still be queried for e.g. "select * from coredump where string like '%secret_key%'".
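
Worse, even a vault API that only answers yes/no "does this dump contain substring X?" is an extraction oracle: you can recover a secret one character at a time. A toy illustration (the secret and the vault_contains() endpoint are made up):

    import string

    SECRET = "secret_key=hunter2"  # stands in for the dump's contents

    def vault_contains(substring: str) -> bool:
        # Hypothetical "... where string like '%substring%'" query.
        return substring in SECRET

    def extract(prefix: str = "secret_key=") -> str:
        recovered = prefix
        alphabet = string.ascii_letters + string.digits + "_-"
        while True:
            for ch in alphabet:
                if vault_contains(recovered + ch):
                    recovered += ch
                    break
            else:
                return recovered  # nothing extends it: we're done

    print(extract())  # prints 'secret_key=hunter2'

At one query per alphabet character per position, a 40-character key falls in a few thousand queries.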

So without more insight, a coredump vault sounds like security theater that makes the intended purpose (debugging) tremendously more difficult.


Everything is imperfect, but where I work crashdumps are uploaded straight to a secure vault and then deleted from the origin system. The dumps are processed, and non-sensitive data is extracted and published with relatively lenient access controls. Sensitive data, such as raw memory dumps, requires a higher tier of permissions. In order to be eligible for that higher tier, your developer machine has to be more locked down than that of people who are not in the secure group. (You also need to have a reason to need more access.)

Given that stack traces, crash addresses, and most register contents are considered security-insensitive, most people don't really need access to the raw dumps.

It's far from perfect, but it would be unfair to call it "security theater". It seems like a pretty decent balance in practice. Admittedly, we have the slight advantage of several hundred million installs, so the actual bugs causing crashes are likely to happen quite a few times, and statistical analysis will often provide better clues than diving deep into an individual crash dump.
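
To make the tiers concrete, here's a rough sketch of the shape of that pipeline. It's illustrative only: the in-memory "vault", the group name, and the faked minidump summary stand in for real infrastructure (a real system would parse the dump with something like Breakpad/Crashpad tooling):

    import hashlib
    from dataclasses import dataclass, field

    VAULT: dict[str, bytes] = {}  # stands in for the real secure store

    @dataclass
    class User:
        name: str
        groups: set = field(default_factory=set)

    @dataclass
    class PublicReport:  # published with lenient access controls
        stack_trace: list
        crash_address: int
        raw_memory_ref: str  # higher tier needed to dereference

    def ingest(raw_dump: bytes) -> PublicReport:
        ref = hashlib.sha256(raw_dump).hexdigest()
        VAULT[ref] = raw_dump  # raw bytes go straight to the vault
        # The uploader deletes the dump from the origin system here.
        # We fake the extracted, security-insensitive summary:
        return PublicReport(stack_trace=["main", "handle_request"],
                            crash_address=0xdeadbeef,
                            raw_memory_ref=ref)

    def fetch_raw(ref: str, user: User) -> bytes:
        if "secure-dump-readers" not in user.groups:
            raise PermissionError("raw memory requires the secure tier")
        return VAULT[ref]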


> Everything is imperfect, but where I work crashdumps are uploaded straight to a secure vault and then deleted from the origin system. The dumps are processed, and non-sensitive data is extracted and published with relatively lenient access controls. Sensitive data, such as raw memory dumps, requires a higher tier of permissions. In order to be eligible for that higher tier, your developer machine has to be more locked down than that of people who are not in the secure group. (You also need to have a reason to need more access.)

From my understanding, this is more or less how the Microsoft system was designed, with credential scanning and redaction over coredumps, but a chain of bugs and negligence broke through those protections.


While your points are all theoretically valid, keeping stuff off of developer filesystems can still help a lot in practice.

This attacker probably (it's unclear, since the write-up doesn't tell us) scanned compromised machines for key material using some kind of dragnet scanning tool. If the data wasn't on the compromised filesystem, they wouldn't have found it, even though in theory they could have sat on a machine with debug access and pulled a core dump out of RAM (depending on the nature of the compromise, even that is a stretch: reading another process's RAM usually requires much higher privilege than filesystem access).

Security is always a tension between the theoretical and the practical, and I think "putting crash dumps in an easy-to-use place that isn't a developer's Downloads folder" isn't a bad idea.


Ephemeral compute/VM/debug environment with restricted access.

Tear down the environment after the debugging is done.

Keeping the crash dumps in a vault presumably allows more granular permissions/control than an internal issue tracker (usually anyone can access the issue tracker). At least a vault can apply RBAC or even time-based policies, so these things aren't lying around forever.
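
A sketch of the RBAC-plus-expiry idea (all names made up); the point is just that a grant is scoped to one dump and dies on its own:

    import time
    from dataclasses import dataclass

    @dataclass
    class Grant:
        user: str
        dump_id: str
        role: str          # e.g. "debugger"
        expires_at: float  # unix timestamp

    GRANTS: list[Grant] = []

    def grant_access(user: str, dump_id: str, ttl_hours: float = 48):
        GRANTS.append(Grant(user, dump_id, "debugger",
                            time.time() + ttl_hours * 3600))

    def can_read(user: str, dump_id: str) -> bool:
        now = time.time()
        return any(g.user == user and g.dump_id == dump_id
                   and g.expires_at > now for g in GRANTS)

    grant_access("alice", "dump-1234")
    assert can_read("alice", "dump-1234")
    assert not can_read("mallory", "dump-1234")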


Oh, yeah, the number of times I see a HAR file with session tokens in it just floating around in JIRA is not fun.


Why does an attacker need symbols?


They're not strictly necessary, but they do speed up comprehension of the program's contents considerably.
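
For example, with symbols available, one call to addr2line (a standard binutils tool) turns a raw crash address into a function name and source line; without them you get "??:0" back and start reverse-engineering offsets by hand. The binary path and address below are placeholders:

    import subprocess

    def symbolicate(binary: str, address: str) -> str:
        # -f prints the enclosing function, -C demangles C++ names.
        out = subprocess.run(
            ["addr2line", "-f", "-C", "-e", binary, address],
            capture_output=True, text=True, check=True)
        return out.stdout.strip()

    # With symbols:    "handle_request\n/src/server.cc:214"
    # Without symbols: "??\n??:0"
    print(symbolicate("./service_debug", "0x4021f3"))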



