
Both exploit Spectre V2, but in different ways. My takeaway:

Training Solo: enter the kernel (switching privilege level) and “self-train” the branch predictor so that a kernel branch mispredicts to a disclosure gadget, then leak memory.

Branch predictor race conditions: enter the kernel while your trained branch-predictor updates are still in flight, so the updates get associated with the wrong privilege level. Again, use this to redirect a branch in the kernel to a disclosure gadget and leak memory.
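In both cases the “disclosure gadget” is just existing kernel code that, when executed speculatively with attacker-influenced data, leaves a secret-dependent footprint in the cache. A rough sketch of the shape of such a gadget, in C with made-up names (the attacker doesn’t write this; they steer the mispredicted branch into kernel code that already looks like it):

    /* Hypothetical gadget: secret_ptr and probe_array stand in for kernel
       memory the attacker can influence via the hijacked branch.
       This body only ever runs speculatively. */
    void gadget(unsigned char *secret_ptr, unsigned char *probe_array)
    {
        unsigned char secret = *secret_ptr;      /* load the secret byte */
        (void)probe_array[secret * 512];         /* touched cache line now depends on it */
    }

Afterwards the attacker times accesses to probe_array from user space (Flush+Reload style); the one fast cache line reveals the secret byte.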


I had thought of something along these lines before, too.

I think there is serious potential for this.


Is that true? I thought you could pay for an H1 service that basically had professionals triaging the vulnerabilities and only passing on the valid ones?

Our company pays for one of these third party triage services for H1.

The quality is seriously lacking. They have dismissed many valid findings.


Ah, thank you for the info!

From what I understood, the service is also (very) expensive. Wild.


There was this TikTok internal memo that stated something like:

US teenagers are the ideal target market for TikTok, because they (get to) spend so much of their time on that app.


Thumbs up. I think this is a step in the right direction.

No matter how you flip it, the net result is negative. These apps are like crack, and it’s morally right to protect developing brains from that.


> This creates a strong incentive for the company to spend resources that they otherwise have no desire to spend on security

Sometimes, though, “Responsible Disclosure” or CVD creates an incentive to keep security issues quiet and accept long lead times for fixes. Going public fast is arguably more sustainable in the long run, as it forces companies and clients to really get their shit together.


Agree with the duality.

On the one hand, it’s not what you should expect from university ethics. On the other hand, this absolutely does happen in covert ways, and the “real” studies are used for bad ends.

Though I do not agree with the researchers, I do not think the right answer is to “cancel culture” them away.

It’s also crazy because Reddit is itself a big AI business, training on your data and selling your data. Ethics, ethics.

What is Reddit doing to protect its users from this real risk?


That's a pretty interesting point in itself.

If AI is training on Reddit posts, and people are using AI to post on Reddit, then AI is providing the data it is trained with.


Yes, this 100%.

And never forget the isElon boolean var that would increase post visibility. lol, what a shame.


Agree about meat; however, the article still made me think.

> What people see feels organic. In reality, they’re engaging with what’s already been filtered, ranked, and surfaced.

Naturally, I (and I think many humans do this too) often perceive the comments/content I see as backscatter of organic content reflecting some sort of consensus. Thousands of people replying the same thing surely gives me the impression of consensus. Obviously, this is not necessarily the truth (and it may be far from it). Still, it remains interesting, because if enough people perceive it as consensus, it may become consensus after all.

Ultimately, I think it’s good to be reminded that it’s still algorithmic knobs at play here, even if that’s not news.
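As a toy illustration of what I mean by knobs (made-up weights and fields, nothing resembling any real ranker):

    /* Toy feed scorer: the weights are the "knobs". What surfaces as
       apparent consensus is whatever this function happens to favour. */
    #include <math.h>

    struct post { double likes, replies, age_hours, promo_fit; };

    double feed_score(const struct post *p)
    {
        const double w_likes = 1.0, w_replies = 2.5;   /* engagement knobs */
        const double w_promo = 4.0, decay = 0.1;       /* business / freshness knobs */
        return (w_likes * log1p(p->likes) + w_replies * log1p(p->replies)
                + w_promo * p->promo_fit) * exp(-decay * p->age_hours);
    }

Nudge w_replies or w_promo and a different set of posts becomes “what everyone is saying”.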


Haha, and the GitHub repo is now offline. lol.

