If the criterion for a bug is "not intended", and that's solely judged by the company, then broken auth et al. suddenly become part of their product design.
I can see this argument making a bit of sense, but if they only documented this 3 years after the issue was reported, they have no way to demonstrate that they truly already knew.
In the end it boils down to: is Github being honest and fair in answering the bug bounty reports?
If you think it is, cool.
If you don't, maybe it's not worth playing ball with Github's bug bounty process.
It doesn't matter if they knew. If they don't deem it a security vulnerability --- and they have put their money where their mouth is, by documenting it as part of the platform behavior --- it's not eligible for a payout. It can be a bug, but if it's not the kind of bug the bounty program is designed to address, it's not getting paid out. The incentives you create by paying for every random non-vulnerability are really bad.
The subtext of this thread is that companies should reward any research that turns up surprising or user-hostile behavior in products. It's good to want things. But that is not the point of a security bug bounty.
> The incentives you create by paying for every random non-vulnerability are really bad.
So much this. It's pretty clear that most people commenting on this thread have never been involved in a bug bounty program on the company's side.
Bug bounty programs get a lot of reports, most of which are frankly useless and many of which are cases of intended behavior subjectively perceived as problematic. Sifting through that mess is a lot of work, and if you regularly pay out on unhelpful reports you end up with many more unhelpful reports.
This particular case definitely feels like one where the intended behavior is horribly broken, but there are absolutely many cases where "this is intended" is the only valid answer to a report.
That's true, but what's stopping a company from documenting a security issue as a known (mis)behaviour/bug? [*]
Companies can join or set up a bug bounty program, and just use it as a fig leaf for pretending to care about their own product/service's security.
Of course bug bounties can be and are abused daily by people who report trivial non-issues in the hope of compensation.
But in the same way, companies can also be bad actors in the way that they engage with bounties. I would usually expect big names (like Google, Apple, Github, etc.) to be trustworthy...
[*] Of course what stops companies is precisely them not being seen as trustworthy actors in the bug bounty system anymore... And for now, that's a decision that individuals have to make themselves
No large company cares even a tiny bit about the money they're spending on bug bounties. They would literally lose money trying to cheat, because it would cost them more in labor to argue with people than to pay out. In reality, the bounty teams at Google and Apple are incentivized to maximize payouts, not minimize them.
If you don't trust the company running a bounty, don't participate. There are more lucrative ways to put vulnerability research skill to use.
Github chooses to store all "Github forks" in the same underlying object store, and to allow accessing objects in that store even when they are not reachable from the refs in the namespace of one "fork". That is purely a Github decision.
They could have split forks off into new repos, but then they wouldn’t be forks, in the repository sense. It was never hard to just copy a repo instead of forking it. The UX just leads people into holding it wrong.
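The underlying git behavior being discussed can be sketched locally (this is plain git, not GitHub's actual fork-network implementation): an object with no ref pointing at it is still readable by its SHA. GitHub's shared storage extends this across forks. The `demo` repo name and commit messages are just placeholders for illustration.

```shell
# Sketch: a commit with no ref pointing at it remains readable by SHA.
set -e
rm -rf demo && git init -q demo && cd demo
git config user.email you@example.com
git config user.name you
git commit -q --allow-empty -m "public commit"
git commit -q --allow-empty -m "secret commit"
SECRET=$(git rev-parse HEAD)
git reset -q --hard HEAD~1      # no ref references "secret commit" anymore
git cat-file -p "$SECRET"       # ...but the object is still accessible by SHA
```

On GitHub the analogous object lives in the fork network's shared store, so knowing a SHA (or even a short prefix) can be enough to view it from any repo in the network.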