> why would you put my tens of thousands of users at risk, instead of emailing me and having the vulnerability fixed in an hour before disclosing
You've got it backwards.
The vuln exists, so the users are already at risk; you don't know who else knows about the vuln, besides the people who reported it.
Disclosing as soon as known means your customers can decide for themselves what action they want to take. Maybe they wait for you, maybe they kill the service temporarily, maybe they kill it permanently. That's their choice to make.
Denying your customers information until you've had time to fix the vuln is really just about taking away their agency in order to protect your company's bottom line: you don't let them know they're at risk until you can say, "but we fixed it already, so you don't need to stop using us to secure yourself, just update!"
You're making an assumption that doesn't match reality - vulnerability discovery doesn't work like some efficient market. Yes, intelligence agencies and sophisticated criminal groups might find 0-days, but they typically target selectively, not deploying exploits universally.
The real threat comes from the vast number of opportunistic attackers who lack the skills to discover vulnerabilities themselves but are perfectly capable of weaponizing public disclosures and proof-of-concepts. These bottom-feeders represent a much larger attack surface that only materializes after public disclosure.
Responsible disclosure gives vendors time to patch before this larger wave of attackers gets access to the vulnerability information. It's not about protecting company reputation - it's about minimizing the window of mass exploitation.
Timing the disclosure to match the fix release is actually the most practical approach for everyone involved. It eliminates the difficult choice customers would otherwise face - either disrupt their service entirely or knowingly remain vulnerable.
Most organizations simply can't afford the downtime from abruptly cutting off a service, nor can they accept the risk of continuing with a known vulnerability. Providing the fix simultaneously with disclosure allows for orderly patch deployment without service interruption.
This coordinated approach minimizes disruption while still addressing the security issue - a balanced solution that protects both the security and continuity needs of end users.
I understand the arguments for the current system; I just don't agree that disruption is worse than loss of agency. Your position inevitably ends up arguing for a paternalistic approach, as you do when you say
> It eliminates the difficult choice customers would otherwise face - either disrupt their service entirely or knowingly remain vulnerable.
You decided they are better off not having to make that choice, so you make it for them whether they like it or not.
In fact, you made the worst choice for them, because you chose that they'd remain unknowingly vulnerable, so they can't even put in temporary mitigations or extra monitoring, or know to be on the lookout for anything strange.
> Most organizations simply can't afford the downtime from abruptly cutting off a service, nor can they accept the risk of continuing with a known vulnerability.
Now this is an interesting part, because the first half is true depending on the service, but bad (that's a BCDR or internet outage issue waiting to happen), and the second half is just wrong (show me a company that doesn't know and accept that they have past-SLA vulns unpatched, criticals included, and I'll show you a company that's lying either to themselves or their customers).
> This coordinated approach minimizes disruption while still addressing the security issue - a balanced solution that protects both the security and continuity needs of end users.
This is not a balanced approach; this is a lowest-common-denominator approach that favors service providers over service users. You don't know if it protects someone's security needs, because people have different security needs: a journalist being targeted by a state actor can have the same iPhone as someone's retired grandma, or the same infotainment system, or home assistant, etc.
I've managed bug bounty and unpaid disclosure programs, professionally, and I know firsthand that it's the company's interests that responsible disclosure serves, first and foremost.
Let’s imagine you found a way to steal funds from a bank. The best move is to let them know that you are concerned (as a customer) for the safety of your own funds.
If they do nothing after a reasonable amount of time, escalate to regulators or change banks. Then, once they announce that some processes have been changed, it’s “thanks to XXX working at YYY for helping us with this”. You win, they win, clients win, everybody wins.
Unwanted public disclosure directly leads to public exploitation, there is nothing good at all about it.
For example, there is an RCE in Discord (totally statistically certain due to the rendering engine, just not public yet), and this is going to be exploited only if someone shares the technical details.
If you don’t disclose it, it’s not like someone else will discover it tomorrow. It’s possible, but not more likely than it was yesterday. If you disclose it, you make sure that everybody with malicious intent knows about it.
A middle ground: announce that Discord is insecure and you’ve found a zero-day. Perhaps a trusted 3rd party exists that can attest publicly (Mitre?) after you show a demo.
Then customers are aware, Discord is pressured to act/shamed, and then you proceed with your private disclosure with a window.
Yep. People keep pushing this false dichotomy that it's either company-directed 'responsible disclosure', or it's "release full working POC and complete writeup publicly, immediately", and there's no middle ground.
Yes, limited disclosure will make people start hunting for the vuln, but it still leaves me more than enough time to revoke an API key, lock down an internet-facing service, turn off my Alexa (no, I don't/won't own one), uninstall the app, etc. And it's better than not knowing while someone intrudes into my systems in the meantime.
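To make that concrete, an interim lockdown can be something as quick as the sketch below (a rough sketch only, assuming a Linux host with ufw and systemd; the port number and service name are made-up placeholders, not any particular vendor's product):

    import subprocess

    # Hypothetical stop-gap after a limited disclosure: firewall off the
    # affected port and stop the exposed service until a patch ships.
    # Placeholder values throughout; adjust for your own stack.
    AFFECTED_PORT = 8443              # port the vulnerable service listens on (placeholder)
    AFFECTED_SERVICE = "examplesvc"   # systemd unit for the vulnerable app (placeholder)

    subprocess.run(["ufw", "deny", str(AFFECTED_PORT)], check=True)
    subprocess.run(["systemctl", "stop", AFFECTED_SERVICE], check=True)

That's a few minutes of work, and it buys time until the vendor's fix actually lands.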
Knowing a half-truth is as bad as knowing nothing. Half the time I will do useless mitigations because actually I would have been unaffected. The other half I will do the wrong thing because of incomplete reporting.
> Knowing a half-truth is as bad as knowing nothing.
This is assuming the perfect user who even understands the bug and the full impact. Everyone is working with half-truths already, in which case by your logic they may as well know nothing.
That is useless, because of the tons of sleazy CVE-collectors. They will always announce the next Heartbleed, details soon. When the details come out, it's a total nothingburger: useless mitigation recommendations, incomplete reports, misreported scope, different attack vectors, I've seen everything. It only feeds the CVE hype cycle, to no benefit for customers, victims, or the public.
You report that to the bank, the bank pays you and the robbers off to keep things quiet. 5 years later, things are discovered and you go to jail for aiding and abetting.
Or you report immediately to the press, press reports, police secures bank building, investigates sloppy practices, customers win, you are a hero, inept banksters and robbers go to jail.
Increasing the chance of a bad actor actually doing something with a vulnerability seems bad, actually. You're effectively shifting responsibility to consumers, who are probably not going to see a CVE for one of the dozens of pieces of software they use every day.
> You're effectively shifting responsibility to consumers, who are probably not going to see a CVE for one of the dozens of pieces of software they use every day.
Which is again, a problem created by the companies themselves. The way this should work is that the researcher discloses to the company, and the company reaches out to and informs their customers immediately. Then they fix it.
But instead companies refuse to tell their customers when they're at risk, and make it out to be the researchers that are endangering people, when those researchers don't wait on an arbitrary, open-ended future date.
> Increasing the chance of a bad actor actually doing something with a vulnerability seems bad, actually.
Unless you know who knows what already, this is unprovable supposition (it could already be being exploited in the wild), and the arguments about whether POC code is good or bad are well-trodden and cover this question.
You are just making the argument that obscurity is security, and it's not.
> The way this should work is that the researcher discloses to the company, and the company reaches out to and informs their customers immediately. Then they fix it.
If that was common practice, bad actors would make sure to be a registered customer of all interesting targets, so that they get informed early about vulnerabilities before there is a fix. And it would create a black market for that information.
When someone gets the information “Asus BIOS has an RCE vulnerability related to driver installation”, they’ll be able to figure out the details quickly with high probability, like OP did.
You are shopping at a store along with some other customers. When entering the store, you notice that an employee of the store has left a large knife outside, under a trashcan. A shady character is wandering around the store, looking for someone to steal from, but hasn't figured out the right angle of attack yet. At some point, you (ever the responsible citizen) stand up on a table in the store and yell "Hey! Just wanted to let everyone know that there is a large, scary looking knife under the trash can outside. You have been warned." You then climb down from the table and leave the store. Knives are dangerous, after all. Immediately after your announcement the shady character goes and grabs the knife, which they then use to stab a customer on their way out of the store and steal their stuff. Unfortunately the customer didn't hear your announcement about the impending danger because they were in the toilet at the time.
Whew, thank god for public disclosure with no prior warning to the people who would've been best equipped to retrieve their knife.
---
This was clearly not the best way to handle the situation.
Sure, you didn't know that the thief was unaware of the knife before your announcement, but he sure as shit was aware afterwards. You not knowing what they know is not a good reason to indiscriminately yell to no one in particular.
I did not make the argument that obscurity is security. The knife being under a trashcan is a risk and should be addressed by management. But that doesn't mean non-obscurity automatically improves security.
You are shopping at a store along with some other customers. When entering the store, you notice a gun laying on the ground by the door. You keep coming back every week, pointing it out, asking if that's intended or not.
They continue to ignore you, or explain how it's intended; a good thing even!
Eventually someone with malicious intent also sees the gun, picks it up, shoots a fellow customer, puts it back where it was, and walks off.
By the next day, miraculously, management will have found the time and resources to remove the gun.
Agreed, that is what often happens. But having seen this pattern before does not mean the solution going forward is to yell "hey everyone, there is a gun" and hope management gets to it before the person with malicious intent does.
Sure, maybe management will ignore you if you tell them about the gun privately. At that point, feel free to disclose publicly. But they are guaranteed to not do anything if they don't know about it and you don't tell them (before telling everyone else including bad actors).
A better analogy would be if you see a bunch of people walking around in faulty stab vests, and you tell them that the vests are faulty before they are recalled and replaced by the company. In that case, telling everyone those vests are actually not going to stop a knife is a very good thing to do.
> I did not make the argument that obscurity is security... But that doesn't mean non-obscurity automatically improves security.
... egad. Yes, having information doesn't mean people will do the right thing with it, but you're not everyone's mommy/god/guardian. People should have the choice themselves about what actions they want to take, and what's in their own best interests.
And obscuring the information that they need to make that choice, in the name of not making them less secure, is, ipso facto, asserting that the obscuring is keeping them more secure than they otherwise might be.
So yes, you absolutely are arguing for obscurity as security.
Sure, we can run with your analogy. So you make everyone aware that the stab vests are faulty. One of the people you make aware of this fact is a thief with a knife, who previously wasn't gonna take the risk of robbing anyone, since he only had a knife (not a gun) and everyone was wearing stab-proof vests. But now he knows, so he goes for it and stabs someone. You are partially responsible for this outcome in this hypothetical scenario, as the thief didn't know beforehand about the defect, and the only reason he ended up stabbing someone was that new knowledge. Again, you not knowing whether or not the thief already knows does not excuse you if he did not and now does through your actions.
I'm arguing that unveiling the obscurity can lead to attacks that wouldn't have happened otherwise, and you are partially to blame for those if they happen (which is true). I am not saying it was "more secure" before the disclosure. Just that, in the world afterwards, you must take responsibility for everyone knowing, including people who did not know before and abuse that knowledge.
> But now he knows, so he goes for it and stabs someone.
Except his old knife he already had with him isn't made for exploiting the flaw in the vest, so it doesn't work. He needs to go home and build a new one, and the people in the mall can go home before he comes back, now that they know their vests are flawed. Otherwise, someone who comes in aware of the flaw when the users are not can stab everyone, and they'd have no clue they were vulnerable.
In real-world terms, the kind of mass exploitation that people use to fear-monger about disclosure already happens every day, and most people don't notice. The script kid installing a Monero miner on your server should not be driving the conversation; it should be the IC spook recording a journalist/dissident/etc.
> Just that, in the world afterwards, you must take responsibility for everyone knowing, including people who did not know before and abuse that knowledge.
This is just a generalized argument for censorship of knowledge. Yes, humans can use knowledge to do bad things. No, that does not justify hiding information. No, that does not make librarians/researchers/teachers responsible for the actions of those that learn from them.
> Except his old knife he already had with him isn't made for exploiting the flaw in the vest, so it doesn't work.
This seems like an unnecessary constraint to bolster your point instead of actually addressing what the other person is saying.
In this analogy, why can’t the old knife exploit the flaw? If the problem with the vest allows a sharp implement through the material when inserted at the correct angle or in the correct place, any sharp object should do.
To bring this back to the real world, this is all unfolding in virtual/digital spaces. The attacker doesn’t need to physically go anywhere, nor can potential victims easily leave the store in many cases. And the attacker often needs very little time to start causing harm thanks to the landscape of tools available today.
Why should it work that way? Disclosing the vuln before fixing it seems like a surefire way for my mum to lose her life's savings. Why do you hate my mum so much?
Why do the companies that make the software hate your mom so much they push out release after release of shit? We're all fine with these developers crapping on the floor as long as we give them 30 days to clean up their steaming pile.
If instead every release was capable of instantly ruining someone's life, maybe we'd be more capable of releasing secure software and judging what software is secure.
I disagree. The vast majority of script kiddies don't know about the zero day.
Instead of just one bad actor using that vulnerability on a few select targets, your proposal will have a few tens of thousands of bots performing drive-by attacks on millions of victims.