I cancelled my paid coderabbit subscription because it always worries me when a post has to go viral on HN for a company to even acknowledge an issue occurred. Their blog makes no mention of this vulnerability, and they don't have any new posts today either.
I understand mistakes happen, but lack of transparency when these happen makes them look bad.
Both articles were published today. It seems to me that the researchers and coderabbit agreed to publish on the same day. This is common practice when the company decides to disclose at all (disclosure isn't required unless customer data was leaked and there's evidence of that; here they are choosing to disclose even though they don't have to).
When the security researchers praise the response, it's a good sign tbh.
The early version of the researcher's article didn't have the whole first section where they "appreciate CodeRabbit’s swift action after we reported this security vulnerability" and the subsequent CodeRabbit talking points.
> The researchers identified that Rubocop, one of our tools, was running outside our secure sandbox environment — a configuration that deviated from our standard security protocols.
This is still ultra-LLM-speak (and no, not just because of the em-dash).
A few years ago such phrases would have been candidates for a game of bullshit bingo; now all the BS has been ingested by LLMs and is being regurgitated upon us in purified form...
Absolutely. In my experience every AI startup is full of AI maximalists. They use AI for everything they can - in part because they believe in the hype, in part to keep up to date with model capabilities. They would absolutely go so far as to write such an important piece of text using an LLM.
I wonder how many of these intern-type tasks LLMs have taken away. The tasks I did as a newbie might have seemed not so relevant to the main responsibilities, but they helped me build institutional knowledge, get a feel for "how things work", and learn who to talk to (and how) to make progress. Now the intern will probably do it using LLMs instead of talking to other people. Maybe the results will be better, but that interaction is gone.
I think there is an infinite capacity for LLMs to be both beneficial and harmful. I look back at learning and think, man, how amazing would it have been if I could have had a personalized tutor helping guide me and teach me the concepts I was having trouble with in school. I think about when I was learning to program and didn't have the words to describe the question I was trying to ask, and felt stupid, or like an inconvenience, when trying to ask more experienced devs.
Then on the flip side, I’m not just worried about an intern using an LLM. I’m worried about the unmonitored LLM performing intern, junior, and ops tasks, and then companies simply using “an LLM did it” as a scapegoat for their extreme cost cutting.
They first disabled Rubocop to prevent further exploitation, then rotated keys. If they had waited for the fix to be deployed, the compromised keys would have remained valid for another 9 hours. According to their response, all other tools were already sandboxed.
However, their response doesn't remediate putting secrets into environment variables in the first place - that is apparently acceptable to them, and it raises a red flag for me.
Yeah, I thought the same. They were really unlucky: the only analyzer that let you include and run code was the one outside the sandbox. What were the chances?
> putting secrets into environment variables in the first place - that is apparently acceptable to them, and it raises a red flag for me
Isn't that standard? The other options I've seen are .env files (amazing dev experience, but not as secure) and dedicated secrets managers like AWS Secrets Manager or competitors such as Infisical. Even with the latter, you need keys to authenticate with the secrets manager, and I believe it's recommended to store those as env vars.
They weren't published together. They managed to get the researchers to add CodeRabbit's talking points after the fact; check out the blue text on the right-hand side.
Most security bugs get fixed without any public notice. Unless there was a breach of customer information (and that can often be verified), there are typically no legal requirements. And there's no real benefit to doing it either. Why would you expect it to happen?
> Unless there was a breach of customer information (and that can often be verified), there are typically no legal requirements.
If the company is regulated by the SEC, I believe you will find that, since at least 2023, any "material" breach is reportable once the determination of materiality is reached.
Sure. And these types of "we fixed it and confirmed nobody actually exploited it" issues are not always treated as material. You can confirm that, for example, by checking SEC reports for each CVE in commercial VPN gateways... or the lack thereof.