Both exploit Spectre V2, but in different ways. My takeaway:
Training Solo:
- Enter the kernel (and switch privilege level) and “self train” to mispredict branches to a disclosure gadget, leak memory.
Branch predictor race conditions:
- Enter the kernel while your trained branch predictor updates are still in flight, causing the updates to be associated with the wrong privilege level. Again, use this to redirect a branch in the kernel to a disclosure gadget, leak memory.
Is that true? I thought you could pay for an H1 service that basically had professionals triaging the vulnerabilities and only passing on the valid ones?
> This creates a strong incentive for the company to spend resources that they otherwise have no desire to spend on security
Sometimes though, “Responsible Disclosure” or CVD creates an incentive to silence security issues and allows long lead times for fixes. Going public fast is arguably more sustainable in the long run, as it forces companies and clients to really get their shit together.
On the one hand, this is not what you should expect from university ethics. On the other hand, this absolutely does happen in covert ways, and the “real” studies get used for bad ends.
Though I do not agree with the researchers, I do not think the right answer is to “cancel culture” them away.
It’s also crazy because the Reddit business is also a big AI business itself, training on your data and selling your data. Ethics, ethics.
What is Reddit doing to protect its users from this real risk?
Agree about meat, however, the article still made me think.
> What people see feels organic. In reality, they’re engaging with what’s already been filtered, ranked, and surfaced.
Naturally, I (and I think many humans do this too) often perceive the comments/content I see as organic content reflecting some sort of consensus. Thousands of people replying the same thing surely gives the impression of consensus. Obviously, this is not necessarily the truth (and it may be far from it). It remains interesting, though, because if enough people perceive it as consensus, it may become consensus after all.
Ultimately, I think it’s good to be reminded that it’s still algorithmic knobs at play here, even if that’s not news.