You're making an assumption that doesn't match reality - vulnerability discovery doesn't work like some efficient market. Yes, intelligence agencies and sophisticated criminal groups might find 0-days, but they typically target selectively, not deploying exploits universally.
The real threat comes from the vast number of opportunistic attackers who lack the skills to discover vulnerabilities themselves but are perfectly capable of weaponizing public disclosures and proof-of-concepts. These bottom-feeders are a far larger threat, and it's one that only materializes after public disclosure.
Responsible disclosure gives vendors time to patch before this larger wave of attackers gets access to the vulnerability information. It's not about protecting company reputation - it's about minimizing the window of mass exploitation.
Timing the disclosure to match the fix release is actually the most practical approach for everyone involved. It eliminates the difficult choice customers would otherwise face - either disrupt their service entirely or knowingly remain vulnerable.
Most organizations simply can't afford the downtime from abruptly cutting off a service, nor can they accept the risk of continuing with a known vulnerability. Providing the fix simultaneously with disclosure allows for orderly patch deployment without service interruption.
This coordinated approach minimizes disruption while still addressing the security issue - a balanced solution that protects both the security and continuity needs of end users.
I understand the arguments for the current system; I just don't agree that disruption is worse than loss of agency. Your position inevitably ends up arguing for a paternalistic approach, as you do when you say:
> It eliminates the difficult choice customers would otherwise face - either disrupt their service entirely or knowingly remain vulnerable.
You've decided they're better off not having to make that choice, so you make it for them whether they like it or not.
In fact, you've made the worst choice for them, because you chose for them to remain unknowingly vulnerable, so they can't even put in temporary mitigations or extra monitoring, or know to be on the lookout for anything strange.
> Most organizations simply can't afford the downtime from abruptly cutting off a service, nor can they accept the risk of continuing with a known vulnerability.
Now this is an interesting part, because the first half is true depending on the service, but bad (that's a BCDR or internet-outage incident waiting to happen), and the second half is just wrong: show me a company that doesn't know and accept that it has unpatched past-SLA vulns, criticals included, and I'll show you a company that's lying either to itself or to its customers.
> This coordinated approach minimizes disruption while still addressing the security issue - a balanced solution that protects both the security and continuity needs of end users.
This is not a balanced approach; it's a lowest-common-denominator approach that favors service providers over service users. You don't know whether it protects someone's security needs, because people have different security needs: a journalist being targeted by a state actor can be using the same iPhone, infotainment system, or home assistant as someone's retired grandma.
I've managed bug bounty and unpaid disclosure programs professionally, and I know firsthand that it's the company's interests that responsible disclosure serves, first and foremost.
Let's imagine you found a way to steal funds from a bank. The best move is to let them know that you, as a customer, are concerned for the safety of your own funds.
If they do nothing after a reasonable amount of time, escalate to regulators or change banks. Then, once they announce that some processes have been changed ("thanks to XXX at YYY for helping us with this"), you win, they win, clients win, everybody wins.
Unwanted public disclosure directly leads to public exploitation; there is nothing good at all about it.
For example, there is an RCE in Discord (statistically near-certain given the rendering engine, just not public yet), and it is only going to be exploited if someone shares the technical details.
If you don’t disclose it, it’s not like someone else will discover it tomorrow. It’s possible, but not more likely than it was yesterday. If you disclose it, you make sure that everybody with malicious intent knows about it.
A middle ground: announce that Discord is insecure and you've found a zero-day. Perhaps a trusted third party exists (MITRE?) that can attest to it publicly after you show a demo.
Then customers are aware, Discord is pressured or shamed into acting, and you proceed with your private disclosure on a fixed window.
Yep. People keep pushing this false dichotomy that it's either company-directed "responsible disclosure" or "release a full working PoC and complete writeup publicly, immediately", with no middle ground.
Yes, limited disclosure will make people start hunting for the vuln, but it still gives me more than enough time to revoke an API key, lock down an internet-facing service, turn off my Alexa (no, I don't/won't own one), uninstall the app, etc. And that's better than not knowing while someone intrudes into my system in the meantime.
Knowing a half-truth is as bad as knowing nothing. Half the time I'll apply useless mitigations because I was never affected in the first place; the other half I'll do the wrong thing because of incomplete reporting.
> Knowing a half-truth is as bad as knowing nothing.
This assumes a perfect user who understands the bug and its full impact. Everyone is already working with half-truths, in which case, by your logic, they may as well know nothing.
That is useless, because of the tons of sleazy CVE collectors. They will always announce the next Heartbleed, details soon. Then the details come out and it's a total nothingburger: useless mitigation recommendations, incomplete reports, misreported scope, different attack vectors - I've seen everything. It only feeds the CVE hype cycle, to no benefit for customers, victims, or the public.
You report that to the bank; the bank pays you and the robbers off to keep things quiet. Five years later, it all comes out and you go to jail for aiding and abetting.
Or you report it immediately to the press, the press reports, the police secure the bank building and investigate the sloppy practices, customers win, you're a hero, and the inept banksters and robbers go to jail.