An hour, sure. Frequently companies sit on it for months.





Many types of vulnerabilities cannot be resolved in one hour. Some require complex thought to resolve.

One hour is absurd for another reason: what timezone are you in? And what about them? What country are they in, and is it a holiday there?

You may say "but it's a vulnerability", and yes, 100%: no heel-dragging.

But not all companies are staffed with 100k devs, and a few days or a week is a balance between letting every script kiddie know and the potential that it is already being exploited in the wild.

If one is going to counter unreasonable stupidity, use reasonable sensibility. One hour is the same as no warning.


Yes, but responsible disclosure should be "you have a week (or whatever) from my first email, then I go public".

What if the vulnerability cannot be easily fixed within the week, even if the company stops all work and focuses completely on the problem?

If the reason for responsible disclosure is to ensure that no members of the public are harmed as a result of said disclosure, should it not be a conversation between the security researcher and the company?

The security researcher should have an approximate idea of how to fix it, and give a reasonable amount of time for a fix. If the fix ought to have been easy, then a short time should suffice, and vice versa.


If the vulnerability can't be fixed within the week, maybe the company should be SOL. This will incentivize companies to build their software better, as they'll know that any vulnerability that is hard to fix will mean consequences.

Maybe the mitigation is for the company to take its service down while it works on the problem. Again, a good incentive to avoid that in the first place. Also an incentive to not waste any time after a report comes in, to see and act on it immediately, etc.

At some point, we have to balance customer risk from disclosing immediately with companies sitting on vulnerabilities for months, vulnerabilities that may be actively exploited.


I hear what you're saying and I agree, but it's perhaps too black and white.

Let's take one of the most disastrous bugs in recent history: Meltdown.

Speculative execution attacks inside the CPU. Mitigating this required (in Paul Turner's words) putting a warehouse of trampolines around an overly energetic 7-year-old.

This, understandably, took a lot of time for both microcode and OS vendors. It took even longer to fix it in silicon.

Not everyone is running SaaS that can deploy silently, or runs a patch cadence that can be triggered in minutes.

I work in AAA games and I'm biased: we have to pass special certifications to release patches, and even if your publisher has good relations, waiting for CERT by itself (after you have a validated fix) is 2 weeks.


Spectre/Meltdown is the perfect example of vendors, Intel and AMD, deflecting blame onto the OS and software producers, successfully avoiding a recall, avoiding refunds for decreased performance, and avoiding most of the blame.

What actually should have happened there is a full recall of all affected hardware, with microcode fixes and payments for lost performance in the meantime, until the new hardware arrived.

Meltdown was a disaster, not only because the bugs themselves were bad, but especially because we let Intel and AMD get away scot-free.


There is no world in which a recall (and/or a refund) is ever possible.

Until it is demonstrated that such flaws are a life-and-death fault, no regulation is possible for them (unlike cars, which do have recalls for faults with life-and-death implications).


In the world of physical goods, that is totally normal. Only software is different. And maybe the US.

> waiting for CERT by itself (after you have a validated fix) is 2 weeks

If the industry practice were a few days to disclosure, just maybe those practices might change, or maybe there would be an (extra paid) option to skip the line for urgent stuff.


And when it's an OS company and the test suites take a week to run (really)?

Dev time + test time + upload to CDN is often longer than a week.


You know, airlines also had a ton of excuses for not making air travel safer: it's expensive, it takes a while, do you know how long these things take, etc.

Still, they did it, because we decided safety is important to us.


Airlines also get to control how equipment is used and have clear, controlled deployments. OS vendors do not.

For any timeline the company can't hit, whether it's a week or 90 days, they should come up with compensating controls, detections, etc that users can implement. Managing vulnerable software isn't a new science.

> The security researcher should have an approximate idea of how to fix it

Any expectation put on the security researcher beyond "maybe don't cause unnecessary shit storms with zero days" needs to be met with an offer of a fat contract.


> The security researcher should have an approximate idea of how to fix it.

How is that in any way the responsibility of independent randos on the internet?

If you truly believe these issues should be fixed, the right answer would be to hold companies accountable for timely security patches, overseen and managed by a government department.

I'm not sure that's a good idea, but expecting random security researchers to somehow hold massive billion-dollar enterprises accountable is silly.


> What if the vulnerability cannot be easily fixed within the week, even if the company stops all work and focuses completely on the problem?

A week is an example and not a definitive value dictated by law, statute, or regulation.

When you report the vulnerability you give the developer a timeline of your plans, and if they can't make the deadline they can come back to you and request more time.


This back and forth is not possible if the researcher is anonymous. And it places all power in the hands of the developer. If the developer says, "I need a year" but the researcher doesn't give them a year, then the developer sues? Or files a criminal complaint? Why is all the risk on the researcher?

So a gunshy researcher stays anonymous to keep their risk lower. They craft a disclosure with a crypto signature. They wait for the developer to post a public announcement about the disclosure that doesn't expose a ton of detail but does include the signature hash and general guidance about what to do until a fix is released.

The researcher then posts their own anonymous public announcement with however much detail they choose. They might wait 24 hours or 7 days or 7 months. They might make multiple announcements with increasing levels of detail. Each announcement includes the original hash.

Anybody can now make an announcement at any time about the vulnerability. If an announcement is signed by the same key as the original and contains more detail than given by the developer, the public can argue back and forth about who is being more or less responsible.

Now the researcher can negotiate with the developer anonymously and publicly. The researcher can claim a bounty if they ever feel safe enough to publicly prove they are the author of the original report.
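
A minimal sketch of that commit-then-reveal flow, assuming Ed25519 signatures, a SHA-256 commitment, and Python's cryptography package (the primitives, library, and message format here are my own illustrative choices, not something prescribed above):

    # Illustrative only: a throwaway Ed25519 key signs the full report;
    # only the SHA-256 hash is shared publicly as the commitment.
    import hashlib
    from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey
    from cryptography.hazmat.primitives.serialization import Encoding, PublicFormat

    report = b"Full vulnerability details, repro steps, suggested mitigations..."
    key = Ed25519PrivateKey.generate()  # never tied to the researcher's identity
    signature = key.sign(report)
    pubkey = key.public_key().public_bytes(Encoding.Raw, PublicFormat.Raw)

    # The developer can include this hash in their public advisory.
    commitment = hashlib.sha256(report).hexdigest()

    # Later, anyone can check a released report against the commitment and
    # verify that follow-up announcements come from the same key.
    assert hashlib.sha256(report).hexdigest() == commitment
    key.public_key().verify(signature, report)  # raises InvalidSignature if forged

The researcher can prove authorship whenever they feel safe enough, either by releasing the report that matches the published hash or by signing a fresh message with the same key.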

Developers who routinely demonstrate responsible disclosure can earn the trust of researchers. Individual researchers get to decide how much they trust and how patient they are willing to be. The public gets to critique after the fact whether they sympathize more with the developer or the researcher. Perhaps a jury can decide which was liable for the level of disclosure they each pursued.


This is what I presume happens today. You have a date on which disclosure will happen, and the company can request more time.

And this is exactly what the parent poster is against - because it is possible to continuously extend this date.


You can always have a conversation if they provide justification.


