> Given a high level of false positives, it's probably not uncommon AT ALL that sometimes it takes a couple of bug reports before something is reproducible or generates a high enough alert/credibility status, as seemed to have happened here.
This wasn't a case of the same reports eventually being taken seriously. None of them were, until the author met people working at Google in person at an event, showed them the issue, and kept persisting.

Less "Oops, we received a couple of reports, better look into it" and more "this guy won't stop bothering us about it, we should probably look into it".

Security reports from proper pentesters tend to include easy-to-reproduce steps, and if you can't reproduce the issue from them, you can ask the reporter to elaborate; it's in their interest that you understand the report, since that's how they get paid.
> Security reports from proper pentesters tend to include easy-to-reproduce steps, and if you can't reproduce the issue from them, you can ask the reporter to elaborate; it's in their interest that you understand the report, since that's how they get paid.
Fair point, but it's also in their interest to overstate the impact of the bug they found. And even when reports are well written, many of the ones I've seen (mostly from new gray hats) were not actually exploitable, even with aggressive PoC code.