
While I'm far from an expert in security, the time I've spent studying cryptography and cryptosystem design has made me extremely wary of words like "encourage" and "discourage", and of phrases like "significantly lowered the chances", as a means of achieving security.

I'm honestly a bit surprised this is the public response to actions being taken to increase security around attacks like these. Cryptosystems are not built around "being really hopeful" but around making mathematical guarantees about the properties of the system (and of course, even then, no system is perfect, nor should it be treated as such).
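To make that concrete, here is a minimal sketch (my own illustration, not from the thread) of the kind of guarantee meant here: against a uniformly random 128-bit key, an attacker making q guesses succeeds with probability at most q / 2^128. That is a bound you can state exactly, rather than something you merely "discourage":

    from fractions import Fraction

    def brute_force_success_bound(key_bits: int, guesses: int) -> Fraction:
        # Upper bound on an attacker's success probability after
        # `guesses` attempts against a uniformly random key.
        return Fraction(guesses, 2 ** key_bits)

    # Even a trillion guesses per second, sustained for a century,
    # barely dents a 128-bit keyspace:
    guesses = 10**12 * 60 * 60 * 24 * 365 * 100
    print(float(brute_force_success_bound(128, guesses)))  # ~9.3e-18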

This reads more like "engineering optimism" than the "professional paranoia" encouraged by Schneier et al. in Cryptography Engineering.



Yeah, this is insane, and it highlights the fact that the fundamental strength of LLMs is also their fundamental weakness: an LLM is a probabilistic black box, not a deterministic algorithm. By its very nature, you cannot secure a probabilistic black box, and you certainly can't give it permissions that allow it access to sensitive data. The people working on this have got to realize this, but they're doing it anyway.
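A toy sketch of that distinction (my own illustration, with made-up function names and a made-up failure rate): a deterministic function either enforces a rule or it doesn't, and you can verify which. A sampled output only enforces the rule with some probability, so no finite amount of testing rules out the bad case:

    import random

    def deterministic_redact(text: str) -> str:
        # Same input, same output, every time: verifiable by inspection.
        return text.replace("SECRET", "[redacted]")

    def llm_like_redact(text: str) -> str:
        # Stand-in for a sampled model: usually complies, occasionally doesn't.
        if random.random() < 0.98:
            return text.replace("SECRET", "[redacted]")
        return text  # the rare tail is the entire security problem

    # 10,000 passing trials prove nothing; a counterexample eventually
    # appears no matter how many tests succeeded before it.
    leaks = sum("SECRET" in llm_like_redact("SECRET data") for _ in range(10_000))
    print(leaks)  # ~200 expected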

I was recently part of a team at work evaluating a product that uses LLMs to prepare corporate taxes. I have nothing to do with accounting, but I was on the demo because of my technical knowledge. The guys on the other end of the call were hyping this thing to no end, thinking we were all accountants. As expected, the accountants I work with were eating it up, until I started asking about a word they weren't even aware of in the context of these systems: hallucination. I asked what the hallucination rate was and whether they'd had issues with the system just making up numbers. They responded with “it happens, but I would say it’s accurate 98% of the time.” They said that with a straight face. That number told me they don't actually know the hallucination rate, and this is not the kind of work where you want to fuck it up any percent of the time. Hallucinations are incompatible with corporate finance.
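Some back-of-the-envelope arithmetic (the numbers here are hypothetical, not from the vendor) on why "accurate 98% of the time" is damning rather than reassuring: if each generated figure is independently right 98% of the time, a return containing a few hundred figures is almost guaranteed to contain at least one fabricated number. Real error modes are likely correlated, which could make this better or worse:

    # Hedged sketch: assumes independent errors and a made-up
    # per-figure accuracy rate.
    per_figure_accuracy = 0.98

    for n_figures in (10, 50, 200):
        p_at_least_one_error = 1 - per_figure_accuracy ** n_figures
        print(f"{n_figures} figures -> {p_at_least_one_error:.1%} chance of an error")

    # 10 figures  -> 18.3% chance of an error
    # 50 figures  -> 63.6% chance of an error
    # 200 figures -> 98.2% chance of an error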

Again - using a probabilistic tool where only a deterministic tool will do.


> The people working on this have got to realize this, but they’re doing it anyway.

This is the most horrific part of all of this, including the push to use LLMs for everything, and it is industry-wide.

> They responded with “it happens, but I would say it’s accurate 98% of the time.” They said that with a straight face. That number told me they don't actually know the hallucination rate, and this is not the kind of work where you want to fuck it up any percent of the time. Hallucinations are incompatible with corporate finance.

Also incompatible with safety-critical systems, medical equipment, and space technology, where mistakes are irreversible and LLMs are completely off limits.



