>... you have to have a vault for things like crashdumps that is so easy to use that people don't get lazy...
Let's assume a crash dump can be anywhere from megabytes to gigabytes in size.
How could a vault handle this securely?
The moment a dump is copied from the vault to the developer's machine, you introduce data remanence (the file can be undeleted from the file system).
Keeping such a coredump purely in RAM makes it accessible on a compromised developer machine (e.g. via GDB), and if the developer machine itself crashes, its own coredump contains/wraps the sensitive coredump.
A vault that doesn't allow direct/full coredump download, but allows queries (think "SQL queries against a vault REST API") could still be queried for e.g. "select * from coredump where string like '%secret_key%'".
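To make that last point concrete, here is a toy sketch (the vault API, names, and data are all invented for illustration) of how a substring-match query endpoint becomes an extraction oracle: even without full download rights, a known prefix can be extended one byte at a time.

```python
# Hypothetical sketch: a vault answering "LIKE '%...%'" queries acts as an
# oracle that leaks secrets byte by byte. Everything here is simulated.
import string

SECRET = b"secret_key=hunter2"  # stand-in for data inside the dump
DUMP = b"\x00heap noise\x00" + SECRET + b"\x00more noise\x00"

def vault_contains(substring: bytes) -> bool:
    """Simulates: select * from coredump where string like '%...%'."""
    return substring in DUMP

def extract(prefix: bytes, max_len: int = 64) -> bytes:
    """Extend a known prefix one byte at a time using the oracle."""
    out = prefix
    while len(out) < max_len:
        for ch in string.printable:
            candidate = out + ch.encode()
            if vault_contains(candidate):
                out = candidate
                break
        else:
            break  # no printable extension found; extraction complete
    return out

recovered = extract(b"secret_key=")
```

Each round costs at most ~100 queries per recovered byte, which is cheap enough that rate limiting alone doesn't save you.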
So without more insight, a coredump vault sounds like security theater that makes the dump tremendously more difficult to use for its intended purpose.
Everything is imperfect, but where I work crashdumps are uploaded straight to a secure vault and then deleted from the origin system. The dumps are processed, and insensitive data is extracted and published with relatively lenient access controls. Sensitive data, such as raw memory dumps, require a higher tier of permissions. In order to be eligible for that higher tier, your developer machine has to be more locked down than that of people who are not in the secure group. (You also need to have a reason to need more access.)
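For illustration, a minimal sketch of that two-tier split; the patterns, field names, and tier labels are invented here, not a description of any real system:

```python
# Minimal sketch of the two-tier split described above: scan a raw dump,
# publish only metadata at the lenient tier, and gate the raw bytes behind
# a higher tier. Patterns and tiers are invented for this sketch.
import re

CREDENTIAL_PATTERNS = [
    re.compile(rb"-----BEGIN [A-Z ]*PRIVATE KEY-----"),
    re.compile(rb"(?i)(password|secret|token)\s*[=:]\s*\S+"),
]

def split_dump(raw: bytes, stack_trace: str) -> dict:
    """Return per-tier artifacts for a single crash dump."""
    flagged = any(p.search(raw) for p in CREDENTIAL_PATTERNS)
    return {
        # Lenient tier: stack trace and coarse metadata only.
        "public": {
            "stack_trace": stack_trace,
            "size": len(raw),
            "contains_credentials": flagged,
        },
        # Restricted tier: the raw bytes, never published alongside.
        "restricted": {"raw": raw},
    }

artifacts = split_dump(b"...password=hunter2...", "main -> parse_config")
```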
Given that stack traces, crash addresses, and most register contents are considered to be security insensitive, most people don't really need access to the raw dumps.
It's far from perfect, but it would be unfair to call it "security theater". It seems like a pretty decent balance in practice. Admittedly, we have the slight advantage of several hundred million installs, so the actual bugs that are causing crashes are likely to happen quite a few times, and statistical analysis will often provide better clues than diving deep into an individual crash dump.
> Everything is imperfect, but where I work crashdumps are uploaded straight to a secure vault and then deleted from the origin system. The dumps are processed, and insensitive data is extracted and published with relatively lenient access controls. Sensitive data, such as raw memory dumps, require a higher tier of permissions. In order to be eligible for that higher tier, your developer machine has to be more locked down than that of people who are not in the secure group. (You also need to have a reason to need more access.)
From my understanding, this is more or less how the Microsoft system was designed, with credential scanning and redaction over coredumps, but a chain of bugs and negligence pierced the veil.
While your points are all valid theoretically, keeping stuff off of developer filesystems can still help a lot practically.
This attacker probably (it's unclear, since the write-up doesn't say) scanned compromised machines for key material using some kind of dragnet scanning tool. If the data wasn't on the compromised filesystem, they wouldn't have found it, even though in theory they could have sat on a machine with debug access and obtained a core dump from RAM. Depending on the nature of the compromise, even that is a stretch: reading another process's RAM usually requires much higher privilege than filesystem access.
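For the curious, a sketch of what such a dragnet scan looks like; the same handful of regexes works defensively, for auditing your own disk for stray key material (the patterns here are illustrative, not exhaustive):

```python
# Sketch of a dragnet filesystem scan for key material. Usable either way:
# by an attacker on a compromised box, or defensively as a disk audit.
import os
import re

KEY_PATTERNS = [
    re.compile(rb"-----BEGIN (?:RSA |EC )?PRIVATE KEY-----"),
    re.compile(rb"AKIA[0-9A-Z]{16}"),  # shape of an AWS access key ID
]

def scan_tree(root: str) -> list:
    """Walk a directory tree and list files containing key-like material."""
    hits = []
    for dirpath, _dirs, files in os.walk(root):
        for name in files:
            path = os.path.join(dirpath, name)
            try:
                with open(path, "rb") as f:
                    data = f.read(1 << 20)  # first 1 MiB per file is enough for a sweep
            except OSError:
                continue  # unreadable file; skip
            if any(p.search(data) for p in KEY_PATTERNS):
                hits.append(path)
    return hits
```

A crash dump sitting in a Downloads folder is exactly the kind of file this sweep picks up, which is the practical argument for the vault.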
Security is always a tension between the theoretical and the practical and I think "putting crash dumps in an easy-to-use place that isn't a developer's Downloads folder" isn't a bad idea.
Ephemeral compute/VM/debug environment with restricted access.
Tear down the environment after the debugging is done.
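In miniature, the lifecycle looks like this (a real setup would use a VM or container and handle secure deletion; the names here are invented and this only sketches the create/use/destroy pattern):

```python
# Toy version of an ephemeral debug environment: stage the dump in a
# throwaway workspace, hand it to the debugging session, remove it after.
import contextlib
import os
import shutil
import tempfile

@contextlib.contextmanager
def ephemeral_workspace(dump_bytes: bytes):
    workdir = tempfile.mkdtemp(prefix="debug-")
    dump_path = os.path.join(workdir, "core.dmp")
    try:
        with open(dump_path, "wb") as f:
            f.write(dump_bytes)
        yield dump_path  # debug against the staged copy
    finally:
        shutil.rmtree(workdir)  # tear down when the session ends

with ephemeral_workspace(b"\x7fELF...") as path:
    staged = os.path.exists(path)
```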
Keeping the crash dumps in a vault presumably allows more permissions/control than an internal issue tracker (usually anyone can access the issue tracker). At least a vault can apply RBAC or even time-based policies so these things aren't lying around forever.
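A toy version of such checks, with invented roles, tiers, and a 30-day retention window:

```python
# Sketch of vault access checks: role-based access plus a retention window
# after which dumps are no longer served. Roles and the window are invented.
from datetime import datetime, timedelta, timezone

RETENTION = timedelta(days=30)
TIER_FOR_ROLE = {"developer": "metadata", "secure-group": "raw"}

def may_access(role: str, artifact_tier: str, uploaded_at: datetime) -> bool:
    if datetime.now(timezone.utc) - uploaded_at > RETENTION:
        return False  # expired: nothing lies around forever
    allowed = TIER_FOR_ROLE.get(role)
    if artifact_tier == "metadata":
        return allowed in ("metadata", "raw")  # either tier sees metadata
    return allowed == "raw"  # raw dumps need the higher tier
```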