
"Responsible" disclosure is paradoxically named because actually it is completely irresponsible. The vast majority of corporations handle disclosures badly in that they do not fix in time (i.e. a week), do not attribute properly, do not inform their users and do not learn from their mistakes. Irresponsibly delayed limited disclosure reinforces those behaviors.

The actually responsible thing to do is to disclose immediately, fully and publicly (and maybe anonymously, to protect yourself). Only after the affected company has repeatedly demonstrated that it reacts properly might it earn the right to a very time-limited heads-up of, say, 5 working days or so.

That irresponsibly delayed limited disclosure is even called "responsible disclosure" is an instance of newspeak.




I make software. If you discover a vulnerability, why would you put my tens of thousands of users at risk, instead of emailing me and having the vulnerability fixed in an hour before disclosing?

I get that companies sit on vulnerabilities, but isn't fair warning... fair?


> why would you put my tens of thousands of users at risk, instead of emailing me and having the vulnerability fixed in an hour before disclosing

You've got it backwards.

The vuln exists, so the users are already at risk; you don't know who else knows about the vuln, besides the people who reported it.

Disclosing as soon as known means your customers can decide for themselves what action they want to take. Maybe they wait for you, maybe they kill the service temporarily, maybe they kill it permanently. That's their choice to make.

Denying your customers information until you've had time to fix the vuln is really just about taking away their agency in order to protect your company's bottom line, by not letting them know they're at risk until you can say, "but we fixed it already, so you don't need to stop using us to secure yourself, just update!"


You're making an assumption that doesn't match reality - vulnerability discovery doesn't work like some efficient market. Yes, intelligence agencies and sophisticated criminal groups might find 0-days, but they typically target selectively, not deploying exploits universally.

The real threat comes from the vast number of opportunistic attackers who lack the skills to discover vulnerabilities themselves but are perfectly capable of weaponizing public disclosures and proof-of-concepts. These bottom-feeders represent a much larger attack surface that only materializes after public disclosure.

Responsible disclosure gives vendors time to patch before this larger wave of attackers gets access to the vulnerability information. It's not about protecting company reputation - it's about minimizing the window of mass exploitation.

Timing the disclosure to match the fix release is actually the most practical approach for everyone involved. It eliminates the difficult choice customers would otherwise face - either disrupt their service entirely or knowingly remain vulnerable.

Most organizations simply can't afford the downtime from abruptly cutting off a service, nor can they accept the risk of continuing with a known vulnerability. Providing the fix simultaneously with disclosure allows for orderly patch deployment without service interruption.

This coordinated approach minimizes disruption while still addressing the security issue - a balanced solution that protects both the security and continuity needs of end users.


I understand the arguments for the current system; I just don't agree that disruption is worse than loss of agency. Your position inevitably ends up arguing for a paternalistic approach, as you do when you say

> It eliminates the difficult choice customers would otherwise face - either disrupt their service entirely or knowingly remain vulnerable.

You decided they are better off not having to make that choice, so you make it for them whether they like it or not.

In fact, you made the worst choice for them, because you chose that they'd remain unknowingly vulnerable, so they can't even put in temporary mitigations or extra monitoring, or know to be on the lookout for anything strange.

> Most organizations simply can't afford the downtime from abruptly cutting off a service, nor can they accept the risk of continuing with a known vulnerability.

Now this is an interesting part, because the first half is true depending on the service, but bad (that's a BCDR or internet outage issue waiting to happen), and the second half is just wrong (show me a company that doesn't know and accept that they have past-SLA vulns unpatched, criticals included, and I'll show you a company that's lying either to themselves or their customers).

> This coordinated approach minimizes disruption while still addressing the security issue - a balanced solution that protects both the security and continuity needs of end users.

This is not a balanced approach; this is a lowest-common-denominator approach that favors service providers over service users. You don't know if it protects someone's security needs, because people have different security needs: a journalist being targeted by a state actor can have the same iPhone as someone's retired grandma, and the same goes for infotainment systems, home assistants, etc.

I've managed bug bounty and unpaid disclosure programs, professionally, and I know firsthand that it's the company's interests that responsible disclosure serves, first and foremost.


Let's imagine you found out how to steal funds from a bank. The best move is to let them know that you are concerned (as a customer) for the safety of your own funds.

If they do nothing after a reasonable amount of time, escalate to regulators or change bank. Then once they release information that some processes are changed: “thanks to XXX working at YYY for helping us during it”. You win, they win, clients win, everybody wins.

Unwanted public disclosure directly leads to public exploitation; there is nothing good at all about it.

For example, there is an RCE in Discord (statistically all but certain given the rendering engine, just not public yet), and it is going to be exploited only if someone shares the technical details.

If you don’t disclose it, it’s not like someone else will discover it tomorrow. It’s possible, but not more likely than it was yesterday. If you disclose it, you make sure that everybody with malicious intent knows about it.


A middle ground: announce that Discord is insecure and you’ve found a zero-day. Perhaps a trusted 3rd party exists that can attest publicly (Mitre?) after you show a demo.

Then customers are aware, Discord is pressured to act/shamed, and then you proceed with your private disclosure with a window.


Yep. People keep pushing this false dichotomy that it's either company-directed 'responsible disclosure', or it's "release full working POC and complete writeup publicly, immediately", and there's no middle ground.

Yes, limited disclosure will make people start hunting for the vuln, but it still gives me more than enough time to revoke an API key, lock down an internet-facing service, turn off my Alexa (no, I don't/won't own one), uninstall the app, etc. And it's better than not knowing while someone intrudes into my system in the meantime.


Knowing a half-truth is as bad as knowing nothing. Half the time I will do useless mitigations because actually I would have been unaffected. The other half I will do the wrong thing because of incomplete reporting.


> Knowing a half-truth is as bad as knowing nothing.

This is assuming the perfect user who even understands the bug and the full impact. Everyone is working with half-truths already, in which case by your logic they may as well know nothing.


This is true even of disclosures with all information available.

I can't count how many people did incorrect or unnecessary fixes for log4shell, even months after it was disclosed.


That is useless, because of the tons of sleazy CVE collectors. They will always announce the next Heartbleed, details soon. When the details are out, it's a total nothingburger: useless mitigation recommendations, incomplete reports, misreported scope, different attack vectors, I've seen everything. It only feeds the CVE hype cycle, to no benefit for the customers, victims and public.


You report that to the bank, the bank pays off you and the robbers to keep things quiet. 5 years later, things are discovered and you go to jail for aiding and abetting.

Or you report immediately to the press, press reports, police secures bank building, investigates sloppy practices, customers win, you are a hero, inept banksters and robbers go to jail.


Increasing the chance of a bad actor actually doing something with a vulnerability seems bad, actually. You're effectively shifting responsibility to consumers, who are probably not going to see a CVE for one of the dozens of pieces of software they use every day.


> You're effectively shifting responsibility to consumers, who are probably not going to see a CVE for one of the dozens of pieces of software they use every day.

Which is again, a problem created by the companies themselves. The way this should work is that the researcher discloses to the company, and the company reaches out to and informs their customers immediately. Then they fix it.

But instead companies refuse to tell their customers when they're at risk, and make it out to be the researchers that are endangering people, when those researchers don't wait on an arbitrary, open-ended future date.

> Increasing the chance of a bad actor actually doing something with a vulnerability seems bad, actually.

Unless you know who knows what already, this is unprovable supposition (it could already be being exploited in the wild), and the argument about whether PoC code is good or bad is well trodden, and covers this question.

You are just making the argument that obscurity is security, and it's not.


> The way this should work is that the researcher discloses to the company, and the company reaches out to and informs their customers immediately. Then they fix it.

If that was common practice, bad actors would make sure to be a registered customer of all interesting targets, so that they get informed early about vulnerabilities before there is a fix. And it would create a black market for that information.

When someone gets the information “Asus BIOS has an RCE vulnerability related to driver installation”, they’ll be able to figure out the details quickly with high probability, like OP did.


You are shopping at a store along with some other customers. When entering the store, you notice that an employee of the store has left a large knife outside, under a trashcan. A shady character is wandering around the store, looking for someone to steal from, but hasn't figured out the right angle of attack yet. At some point, you (ever the responsible citizen) stand up on a table in the store and yell "Hey! Just wanted to let everyone know that there is a large, scary looking knife under the trash can outside. You have been warned." You then climb down from the table and leave the store. Knives are dangerous, after all. Immediately after your announcement the shady character goes and grabs the knife, which they then use to stab a customer on their way out of the store and steal their stuff. Unfortunately the customer didn't hear your announcement about the impending danger because they were in the toilet at the time.

Whew, thank god for public disclosure with no prior warning to the people who would've been best equipped to retrieve their knife.

---

This was clearly not the best way to handle the situation.

Sure, you didn't know that the thief was unaware of the knife before your announcement, but he sure as shit was aware afterwards. You not knowing what they know is not a good reason to indiscriminately yell to no one in particular.

I did not make the argument that obscurity is security. The knife being under a trashcan is a risk and should be addressed by management. But that doesn't mean non-obscurity automatically improves security.


Instead we get this version:

You are shopping at a store along with some other customers. When entering the store, you notice a gun laying on the ground by the door. You keep coming back every week, pointing it out, asking if that's intended or not.

They continue to ignore you, or explain how it's intended; a good thing even!

Eventually someone with malicious intent also sees the gun, picks it up, shoots a fellow customer, puts it back where it was, and walks off.

By the next day, miraculously, management will have found the time and resources to remove the gun.


Agreed, that is what often happens. But after seeing this pattern before, that does not mean the solution going forward is to yell "hey everyone there is a gun" and hope management gets to it before the person with malicious intent.

Sure, maybe management will ignore you if you tell them about the gun privately. At that point, feel free to disclose publicly. But they are guaranteed to not do anything if they don't know about it and you don't tell them (before telling everyone else including bad actors).


A better analogy would be if you see a bunch of people walking around in faulty stab vests, and you tell them that the vests are faulty before they are recalled and replaced by the company. In which case, telling everyone those vests are actually not going to stop a knife, is a very good thing to do.

> I did not make the argument that obscurity is security... But that doesn't mean non-obscurity automatically improves security.

... egad. Yes, having information doesn't mean people will do the right thing with it, but you're not everyone's mommy/god/guardian. People should have the choice themselves about what actions they want to take, and what's in their own best interests.

And obscuring the information that they need to make that choice, in the name of not making them less secure, is, ipso facto, asserting that the obscuring is keeping them more secure than they otherwise might be.

So yes, you absolutely are arguing for obscurity as security.


Sure, we can run with your analogy. So you make everyone aware that the stab vests are faulty. One of the people you make aware of this fact is a thief with a knife, who previously wasn't gonna take the risk on robbing anyone, since he only had a knife (not a gun) and everyone was wearing stab proof vests. But now he knows, so he goes for it and stabs someone. You are partially responsible for this outcome in this hypothetical scenario, as the thief didn't know beforehand about the defect and the only reason he ended up stabbing someone was due to this knowledge. Again, you not knowing whether or not the thief already knows does not excuse you if he did not and now does through your actions.

I'm arguing that unveiling the obscurity can lead to attacks that wouldn't have happened otherwise, and you are partially to blame for those if they happen (which is true). I am not saying it was "more secure" before the disclosure. Just that, in the world afterwards, you must take responsibility for everyone knowing, including people who did not know before and abuse that knowledge.


> But now he knows, so he goes for it and stabs someone.

Except his old knife he already had with him isn't made for exploiting the flaw in the vest, so it doesn't work. He needs to go home and build a new one, and the people in the mall can go home before he comes back, now that they know their vests are flawed. Otherwise, someone who comes in and is aware of the flaw when the users are not, can stab everyone, and they'd have no clue they were vulnerable.

In real-world terms, the kind of mass exploitation that people use to fearmonger about disclosure already happens every day, and most people don't notice. The script kid installing a Monero miner on your server should not be driving the conversation; it should be the IC spook recording a journalist/dissident/etc.

> Just that, in the world afterwards, you must take responsibility for everyone knowing, including people who did not know before and abuse that knowledge.

This is just a generalized argument for censorship of knowledge. Yes, humans can use knowledge to do bad things. No, that does not justify hiding information. No, that does not make librarians/researchers/teachers responsible for the actions of those that learn from them.


> Except his old knife he already had with him isn't made for exploiting the flaw in the vest, so it doesn't work.

This seems like an unnecessary constraint to bolster your point instead of actually addressing what the other person is saying.

In this analogy, why can’t the old knife exploit the flaw? If the problem with the vest allows a sharp implement through the material when inserted at the correct angle or in the correct place, any sharp object should do.

To bring this back to the real world, this is all unfolding in virtual/digital spaces. The attacker doesn’t need to physically go anywhere, nor can potential victims easily leave the store in many cases. And the attacker often needs very little time to start causing harm thanks to the landscape of tools available today.


Why should it work that way? Disclosing the vuln before fixing it seems like a surefire way for my mum to lose her life's savings. Why do you hate my mum so much?


Why not turn this around?

Why do the companies that make the software hate your mom so much they push out release after release of shit? We're all fine with these developers crapping on the floor as long as we give them 30 days to clean up their steaming pile.

If instead every release was capable of instantly ruining someone's life, maybe we'd be more capable of releasing secure software and judging what software is secure.


I disagree. The vast majority of script kiddies don't know about the zero day.

Instead of just one bad actor using that vulnerability on a few select targets, your proposal will have tens of thousands of bots performing drive-by attacks on millions of victims.


I think one point being made is that (in this example) you would've been much less careless about shipping the vulnerability, if you knew you'd be held accountable for it.

With current practice, you can be as sloppy and reckless as you want, and when you create vulnerabilities because of that, you somehow almost push the "responsibility" onto the person who discovers it, and you aren't discouraged from recklessness.

Personally, I think we need to keep the good part of responsible disclosure, but also phase in real penalties for the parties responsible for creating vulnerabilities that are exploited.

(A separate matter is the responsibility of parties that exploit the vulnerabilities. Some of those may warrant stronger criminal-judicial or military responses than they appear to receive.)

The ideal is a societal culture of responsibility, but in the US we've in some ways been conditioning people to be antisocial for decades, including by elevating some of the most greedy and arrogant to role models.


> you would've been much less careless about shipping the vulnerability, if you knew you'd be held accountable for it

I have a problem with this framing. Sure, some vulnerabilities are the result of recklessness, and there’s clearly a problem to be solved when it comes to companies shipping obviously shoddy code.

But many vulnerabilities happen despite great care being taken to ship quality code. It is unfortunately the nature of the beast. A sufficiently complex system will result in vulnerabilities even a careful person could not have predicted.

To me, the issue is that software now runs the world, despite these inherent limitations of human developers and the process of software development. It’s deployed in ever more critical situations, despite the industry not having well defined and enforceable standards like you’d find in some engineering disciplines.

What you’re describing is a scenario that would force developers to just stop making software, on top of putting significantly more people at risk.

I still believe the industry has a problem that needs to be solved, and it needs a broad culture shift in the dev community, but disagree that shining a bright light on every hole such that it causes massive harm to “make devs accountable” is a good or even reasonable solution.


I think that culture shift will have to come from the top in business -- the CEO and the board.

At this point, the software development field is about operating within the system decided by those others, with the goal of personally getting money.

After you've made the CEO and board accountable, I think dev culture will adapt almost immediately.

Beware of attempts to push engineering licensing or certifications, etc. as a solution here. Based on everything we've seen in the field in recent decades, that will just be used at the corporate level as a compliance letter-but-not-spirit tool to evade responsibility (as well as a moat to upstart competitors), and a vendor market opportunity for incompetent leeches.

First you make the CEO and board accountable, then you let the dev culture change, and once you have a culture of people taking responsibility, you'll have the foundation to add in licensing (designed in good faith) as an extra check on that, if that looks worthwhile.


> What you’re describing is a scenario that would force developers to just stop making software, on top of putting significantly more people at risk.

Good. I work in code security/SBOM, the amount of shit software from entities that should otherwise be creating secure software should worry you.

Businesses care very little about security and far more about pushing the new feature fast. And why not? There is no real penalty for it.


What is your position on open source projects? Should someone who writes software in their spare time who decides to share it publicly be forced to stop doing so?

I’m more open to harsher limits on commercial software, especially in certain categories. But underneath all of this we’re discussing an ecosystem and a culture which can’t be cleanly separated.

Some of the binary thinking I see in this thread would be deeply damaging to parts of that ecosystem with potentially major unintended consequence. Open source software is critically important for human rights/freedom. Taken at face value, many of the comments here directly threaten that freedom.

I’m not assuming that’s your stance, but I’m curious how you see the open source aspect of this considering how significant its role is - especially in the security space.


I don't have the answer here. Open source is the base of a lot of secure software. And at the same time open source software gets pulled into other functional software that has wide spread and potentially dangerous outcomes.

OpenSSL, for example. Any security flaw in this package has worldwide effects, but we would be worse off without it.

Another example is the xz software that was attacked and then pulled into distributions. We were just lucky it was caught relatively early.


Therein lies the rub. Whatever the answer is, it will require careful and thoughtful solutions, not oversimplified conclusions that raking developers over the coals publicly with no warning is somehow “Good”.

To be clear, I have far less sympathy for big software shops that pump out negligently bad code and then have to be prodded to fix it, but they’re not the only players involved.


> A sufficiently complex system will result in vulnerabilities even a careful person could not have predicted.

I think as a field we're actually reasonably good at quantifying most of these risks and applying practices to reduce them. Once in a blue moon you do have "didn't see that coming" cases, but those cause a very minor part of the damage that people suffer because of software vulnerabilities. Most harm is caused by classes of vulnerabilities that are boringly pedestrian.


The problem with a fair warning is that once I email you such a warning, I'll never be able to anonymously publish it, no matter how much you ignore the report. Then the "fair" thing becomes that I never go public, because I'm confident you'll call lawyers.


So can't you disclose it anonymously? I'm pretty sure most people who are savvy enough to find zero-days know how to get an email address anonymously.


All I'll say is: try it in practice. You'll quickly find it dismissed as "not professional", and people will quickly claim it's "irresponsible" for that reason.


Why would you care, if you publish it anonymously?


Can't you just send it from anon email?


Because there is an information disparity I could profit from instead of doing free work for you. Even if that disparity is just "posting the vuln to my blog" to get e-famous.


According to the post above, if you earned enough reputation then you might be given that one-hour window for fixing before disclosing. The issue isn't so much whether or not there should be a "private" window but how long it lasts, especially when the vendor is a multi-billion-dollar company.


Let’s not forget the end users in this scenario, who will not be able to react to this as quickly as a billion dollar company regardless of how well they notify their customers.


Absolutely, which is yet another reason why this abstraction from the conditions of creation of anything tech-related is something that should be eliminated


Fair warning through "responsible" disclosure was abused again and again and again. Why should I trust company no. 1000 after 999 have misled bug reporters, the public, their customers and the rest of the world about their own "just an hour"?


You made the software, you have your paid customers, you are responsible for security of your customers. If you have an RCE that's your problem and you gotta fix it.

An hour, sure. Frequently companies sit on it for months.


Many types of vulnerabilities cannot be resolved in one hour. Some require complex thought to resolve.

One hour is absurd for another reason: what timezone are you in? And they? What country, and is it therefore a holiday?

You may say "but vulnerability", and yes. 100% no heel dragging.

But not all companies are staffed with 100k devs, and a few days or a week is a balance between letting every script kiddie know and the potential that it is already being exploited in the wild.

If one is going to counter unreasonable stupidity, use reasonable sensibility. One hour is the same as no warning.


Yes but responsible disclosure should be "you have a week (or whatever) from my first email, then I go public".


What if the vulnerability cannot be easily fixed within the week, even if the company stops all work and focuses completely on the problem?

If the reason for responsible disclosure is to ensure that no member of the public is harmed as a result of said disclosure, should it not be a conversation between the security researcher and the company?

The security researcher should have an approx. idea of how or what to do to fix, and give a reasonable amount of time for a fix. If the fix ought to have been easy, then a short time should suffice, and vice versa.


If the vulnerability can't be fixed within the week, maybe the company should be SOL. This will incentivize companies to build their software better, as they'll know that any vulnerability that is hard to fix will mean consequences.

Maybe the mitigation is for the company to take its service down while it works on the problem. Again, a good incentive to avoid that in the first place. Also an incentive to not waste any time after a report comes in, to see and act on it immediately, etc.

At some point, we have to balance customer risk from disclosing immediately with companies sitting on vulnerabilities for months, vulnerabilities that may be actively exploited.


I hear what you're saying and I agree, but it's perhaps too black and white.

Let's take one of the most disastrous bugs in recent history: Meltdown.

Speculative execution attacks inside the CPU. This required (in Paul Turner's words) putting a warehouse of trampolines around an overly energetic 7-year-old.

This understandably took a lot of time, both for microcode and OS vendors. It took even longer to fix it in silicon.

Not everyone is running SaaS that can deploy silently, or runs a patch cadence that can be triggered in minutes.

I work in AAA games and I'm biased, we have to pass special certifications to release patches, if your publisher has good relations, waiting for CERT by itself (after you have a validated fix) is 2 weeks.


Spectre/Meltdown is the perfect example of vendors, Intel and AMD, deflecting blame onto the OS and software producers, successfully avoiding a recall, avoiding refunds for decreased performance and avoiding most of the blame.

What actually should have happened there is a full recall of all affected hardware, with microcode fixes and payments for lost performance in the meantime, until the new hardware arrived.

Meltdown was a disaster, not only because the bugs themselves were bad, but especially because we let Intel and AMD get away scot-free.


There is no world in which a recall (and/or a refund) is ever possible.

Until it is demonstrated that such flaws are a life and death fault, no regulation is possible for such flaws (unlike cars - which do have such recalls for faults that have life and death implications).


In the world of physical goods, that is totally normal. Only software is different. And maybe the US.

> waiting for CERT by itself (after you have a validated fix) is 2 weeks

If the industry practice were a few days to disclosure, just maybe those practices might change, or maybe there would be an (extra-paid) option to skip the line for urgent stuff.


And when it's an OS company and the test suites take a week to run (really)?

Dev time + test time + upload to CDN is often longer than a week.


You know, airlines also had a ton of excuses for not making air travel so safe, it's expensive, takes a while, do you know how long these things take, etc.

Still, they did it, because we decided safety is important to us.


Airlines also get to control how equipment is used and have clear, controlled deployments. OS vendors do not.


For any timeline the company can't hit, whether it's a week or 90 days, they should come up with compensating controls, detections, etc that users can implement. Managing vulnerable software isn't a new science.

> The security researcher should have an approx. idea of how or what to do to fix

Any expectation put on the security researcher beyond "maybe don't cause unnecessary shit storms with zero days" needs to be met with an offer of a fat contract.


> The security researcher should have an approx. idea of how or what to do to fix.

How is that in any way the responsibility of independent randos on the internet?

If you truly believe these issues should be fixed, the right answer would be to hold companies accountable for timely security patches, overseen and managed by a government department.

I'm not sure that's a good idea, but expecting random security researchers to somehow hold massive billion-dollar enterprises accountable is silly.


> What if the vulnerability cannot be easily fixed within the week, even if the company stops all work and focuses completely on the problem?

A week is an example and not a definitive value dictated by law, statute, or regulation.

When you report the vulnerability you give the developer a timeline of your plans, and if they can't make the deadline they can come back to you and request more time.


This back and forth is not possible if the researcher is anonymous. And it places all power in the hands of the developer. If the developer says, "I need a year" but the researcher doesn't give them a year, then the developer sues? Or files a criminal complaint? Why is all the risk on the researcher?

So a gunshy researcher stays anonymous to keep their risk lower. They craft a disclosure with a crypto signature. They wait for the developer to post a public announcement about the disclosure that doesn't expose a ton of detail but does include the signature hash and general guidance about what to do until a fix is released.

The researcher then posts their own anonymous public announcement with however much detail they choose. They might wait 24 hours or 7 days or 7 months. They might make multiple announcements with increasing levels of detail. Each announcement includes the original hash.

Anybody can now make an announcement at any time about the vulnerability. If an announcement is signed by the same key as the original and contains more detail than given by the developer, the public can argue back and forth about who is being more or less responsible.

Now the researcher can negotiate with the developer anonymously and publicly. The researcher can claim a bounty if they ever feel safe enough to publicly prove they are the author of the original report.

Developers who routinely demonstrate responsible disclosure can earn the trust of researchers. Individual researchers get to decide how much they trust and how patient they are willing to be. The public gets to critique after the fact whether they sympathize more with the developer or the researcher. Perhaps a jury can decide which was liable for the level of disclosure they each pursued.
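
A minimal sketch of that commit-then-reveal flow, just to make the mechanics concrete. This is my own illustration rather than an established protocol; it assumes Python with the third-party 'cryptography' package, and the report text and the same_author helper are invented for the example:

    # Researcher side: commit to the full report without revealing it.
    import hashlib
    from cryptography.exceptions import InvalidSignature
    from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

    report = b"Full technical write-up of the vulnerability, PoC included."
    commitment = hashlib.sha256(report).hexdigest()  # safe for the vendor to publish

    # One-time anonymous identity: sign the commitment so later, more detailed
    # announcements can be tied to the same (still anonymous) author.
    key = Ed25519PrivateKey.generate()
    signature = key.sign(commitment.encode())
    public_key = key.public_key()  # published alongside the first anonymous announcement

    # Anyone checking whether a later announcement comes from the same author:
    def same_author(pub, sig, claimed_commitment):
        try:
            pub.verify(sig, claimed_commitment.encode())
            return True
        except InvalidSignature:
            return False

    # When the researcher finally reveals the report, anyone can confirm it
    # matches the hash the vendor published months earlier.
    assert hashlib.sha256(report).hexdigest() == commitment
    assert same_author(public_key, signature, commitment)

The vendor only ever has to publish the hash; the researcher keeps the key and can prove authorship of the original report, or of any follow-up announcement, whenever they feel safe doing so.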


This is what I presume happens today. You have a date on which disclosure will happen, and the company can request more time.

And this is exactly what the parent poster is against - because it is possible to continuously extend this date.


You can always have a conversation if they provide justification.


You already put your tens of thousands of users at risk. It's the people putting bugs in the software who do that, not the ones discovering them.


Please enlighten me on how you've managed to never write any bugs.


Well, not sure DJB posts here, but he has kept it to a minimum.

And this is mostly BS too. People don't write bug-free software; they write features.

Other industries had to license professional engineers to keep this kind of crap from being a regular issue.


"Licensed professional engineers" are a software-development myth.

If all our software were as simple as a bridge, then we could have that. A bridge is 5 sheets of plans, 10 pages of foundation checks, 30 pages of calculations, 100 pages of material specs. You can read all those in a day. Check the calculations in a week. The next bridge will be almost the same.

Now tell me about any software where the spec is that short and simple. /bin/cat? /bin/true? Certainly not the GNU versions of those.

Software is different because we don't build 1000 almost-identical bridges with low complexity. We always build something new and bespoke, with extremely high complexity compared to any kind of building or infrastructure. Reproduction is automatic, so there will never be routine. Totally different kind of job, where a licensed professional will not help at all.


I hate to be dismissive, but tired old meme is tired.

With what I do I work with a lot of larger companies and get to see the crap they push out with no architectural design and no initial security posture. I see apps with thousands of packages, including things like typosquats. I see the quality of the security teams which are contractors following checklists with no idea what they mean.

Saying that actual professions would make no difference sounds insane to me. Again, to me, it sounds like every other industry saying "self-regulation is fine, we're special, we'll manage ourselves".


No. Licensed professionals are the engineering checklist people. "Not my fault, wasn't on the checklist, I've used the official approved one".

Licensed professionals checked a dam built by licensed professionals. Dam broke, killed people. Everyone claims to be innocent and the other party didn't read the right reports or didn't report the right problems: https://www.ecchr.eu/fileadmin/Fallbeschreibungen/Case_Repor... It is all just another method of shifting blame.

What really helps more than prescriptive regulation is liability. As soon as there is a strict liability for software companies, things will get better. What could also help is mandatory insurance for software producers. Then the insurance companies will either charge them big bucks or demand proof of safety and security.


> We always build something new and bespoke, with extremely high complexity compared to any kind of building or infrastructure.

Maybe this is part of the problem?


The world of software development is just different. Copying software is a solved problem, it is easy, cheap and totally normal. Whenever you start your software it is going to be copied around a few times. So the trivial "same bridge as last time, just over the next river" doesn't really exist in software. The software world has that situation solved, you get your software plus the (if so inclined) recommended OS and hardware to run it on, install it and you are fine. You can do it 1000 times over and verify that things are the same. You can even do it automatically. You can do it 1000 times over on different hardware, just rerun the test suite if you are paranoid, done. This is the level of triviality that is solved by licensed engineers when doing bridges.

Now in a few years, when maybe 3D-printing for buildings takes off, all that "fun" that is now bespoke software development will also arrive in architecture. THEN things will really be ugly ;)


Didn't say that. But I can't blame the ones publicising the bugs we put in there.

Strange wording. You are the one that put tens of thousands of your users at risk. Not the one who discovers the problem.


If you forget your shop's door open after hours, and someone starts shouting "HEY GUYS! THIS DOOR IS OPEN! LOOK!", I have a hard time putting 100% of the blame on you.


If I point out the bridge is cracking and you get angry about it, I'm blaming the idiots that engineered a crap bridge and didn't maintain it.

Maybe it's time we get professional standards if this is how we are going to behave?


This seems like a fallacious analogy to me.

Why is a cracked bridge dangerous? Because anyone traveling over it or under it is at risk of being hurt if the bridge collapses. Warning people that it is cracking does not increase the likelihood of a collapse.

Why is a software vulnerability dangerous? Because anyone who knows about it and has nefarious intent can now use it as a weapon against those who are using the vulnerable software, and the world is full of malicious actors actively seeking new avenues to carry out attacks.

And there are quite a few people who would exploit the knowledge of an unlocked door if given the chance.

There’s a very clear difference in the implications between these scenarios.


A cracked bridge is always dangerous.

A vulnerable piece of software is always dangerous.

There are large numbers of state funded exploit groups and otherwise blackhat organizations that find and store these vulnerabilities waiting for the right opportunity, say economic warfare.

Much like building safe bridges from the start, we need the same ideology in software. The 'we can always patch it later' mindset is eventually going to screw us over hard.


I agree with the conclusion that we need safer software from the start.

But we also have to deal with the reality of the situation in front of us.

I will maintain that the differences between the implications of revealing a crack in a bridge vs. prematurely revealing a vulnerability to literally the entire world are stark. I find it pretty problematic to continue comparing them and a rather poor analogy.

> There are large numbers of state funded exploit groups and otherwise blackhat organizations that find and store these vulnerabilities

This underscores my point. What you’ve been describing is a scenario in which those organizations are handed new ammunition for free (assuming they don’t already have the vuln in their catalog).


They didn't "forget" to lock the door that one time. They just never lock it. The guy yelling it out loud is pissing off all the people who already knew you didn't. He is not the one to be angry at.

That's because nobody actually cares about security, nor do they want to pay for it. I'm a security champion at my company, and security-related work gets pushed off as much as possible to focus on feature work. If we actually wanted security to be a priority, they would employ security champions whose only job was to work on security aspects of the system instead of trying to balance security and feature work, because feature work will always prevail.


It's such a loaded term that I refuse to use it. "vendor-coordinated disclosure" is a much better term, imho

(and in the world of FOSS you might have "maintainer-coordinated" too)


> "Responsible" disclosure is paradoxically named because actually it is completely irresponsible.

It's only paradoxical if you've never considered the inherent conflicts present in everything before.

The "responsible" in "responsible disclosure" relates to the researcher's responsibility to the producer, not the company's responsibility to its customers. The philosophical implication is that the product does what it was designed to do; now you (the security researcher) are making it do something you don't think it should do, and so you should be responsible for how you get that out there. Otherwise you are damaging me, the corporation, and that's just irresponsible.

As software guys we probably consider security issues a design problem. The software has a defect, and it should be fixed. A breakdown in the responsibility of the corporation to their customer. "Responsible disclosure" considers it external to the software. My customers are perfectly happy; you have decided to tell them that they shouldn't be. You've made a product that destroys my product, so you need to make sure you don't destroy my product before you release it.

The security researcher is not primarily responsible to the public, they are responsible to the corporation.

It's not a paradox, it's just a simple inversion of responsibility.


> The security researcher is not primarily responsible to the public, they are responsible to the corporation.

Unless the researcher works for the corporation on an in-house security team, what’s your reasoning for this?

Why are they more responsible to the corporation they don't work for than to the people they're protecting (depending on the personal motivations of the individual security researcher, I guess)?


By "simple inversion of responsibility" do you mean your twisted logic of "everyone should think first and foremost about my profits"?


What about damage control? I would argue your "anonymous, immediate disclosure" to the public (filled with bad actors) would be rubbing salt in the wound (allowing more people to exploit the vulnerability before it's fixed). That's why nobody publishes writeups before the vuln is fixed. Even if corporations don't fix vulns in time, I can only see harm being done from not privately reporting them.


> I can only see harm being done from not privately reporting them

That's because you need to look at the fuller picture. If every vuln were published immediately, the entire industry would need to be designed differently. We wouldn't push features at a hundred miles per hour but would instead have pipelines more optimized for security and correctness.

There is almost no downside currently for me to write insecure shit, someone else will debug it for me and I'll have months to fix it.


I mean, to be a bit more reasonable, there's a middle ground here. Maybe disclosing a massive RCE vulnerability in software used by a lot of companies on the 25th of December is not a good idea. And perhaps an open source dev with a security@project mail deserves a tad more help and patience than a megacorp with a record of shitty security management. And if you are a company that takes security seriously and is responsive to security researchers' inquiries, you deserve at least the chance to fix it fast and before it becomes public.

It's just that there are some companies EVERYONE knows are shitty. ASUS is one of them.


You are right about open source developers who do this on the side, as a hobby, and even if they don't are usually underpaid and understaffed. They do deserve more time and a different approach.

But corporations making big bucks from their software need to be able to fix things quickly. They took money for their software, so it is their responsibility. If they cannot react on a public holiday, tough luck. Just look at their payment terms. Do they want their money within 30 days or 25 work days? Usually it is the former, they don't care about your holidays, so why should anyone care about theirs? Also, the bad guys don't care about their victims' holidays. You are just giving them extra time to exploit. The only valid argument would be that the victims might not be reading the news about your disclosure on a holiday. But since you are again arguing about software used by a lot of companies (as opposed to private users), I don't see a problem there. They also have their guards on duty and their maintenance staff on call for a broken pipe or something.

What's most important is that I'm saying we should reverse the "benefit of the doubt". A vast majority of corporations have shitty security handling. Even the likes of Google talk big with their 90-day window from private irresponsible disclosure to public disclosure. And even Google regularly fails to fix things within those 90 days. So the default must be immediate, public and full disclosure. Only when companies have proven their worth by correctly reacting to a number of those can they be given the "benefit of the doubt" and a heads-up.

Because otherwise, when the default is irresponsible private disclosure, they will never have any incentive to get better. Their users will always be in danger unknowingly. The market will not have information to decide whether to continue buying from them. The situation will only get worse.


> But corporations making big bucks from their software need to be able to fix things quickly. They took money for their software, so it is their responsibility. If they cannot react on a public holiday, tough luck.

Because it is not corporations who react on public holidays, but developers, who are human beings.

It is not corporations that react by installing patches on a Friday, but us sysadmins, who are human beings.


Companies will act out of greed and use their customers and developers as "human shields" to get out of their responsibility. Your on-call duty should be paid by the hour like any other duty, with double pay on weekends, holidays and nights. "But the poor developers" is just the "we will hurt this poor innocent puppy" defense. The evil ones are the ones inflicting the hurt, the greedy companies, not the reporters.


Overall, I share your reasoning and would mostly concur, but there are some rather important caveats, especially regarding this one:

> The only valid argument would be that the victims might not be reading the news about your disclosure on a holiday. But since you are again arguing about software used by a lot of companies (as opposed to private users), I don't see a problem there.

Let's say MegacorpA is a big software vendor that makes some kind of software other companies use to manage really sensitive user data. Even if MegacorpA fixes their stuff on the 25th, two hours after they got an e-mail from you, not all of their clients might react that fast, and thus a public disclosure could cause massive harm to end users, even if MegacorpA did everything right.

Ultimately, I guess my argument is that there's not a one-size-fits-all solution. But "responsible disclosure" should be reserved for companies acting responsibly.




