> If the luxury leads to the exploits you should do without the luxury.
One man's luxury is another man's essential.
It's easy to criticize toy examples that deliver worse results than the standard approach, and expose users to excessive danger in the process. Sure, maybe let's not keep doing that. But that's not an actual solution - that's just being timid.
Security isn't an end in itself, it's merely a means to achieve an end in a safe way, and should always be thought of as subordinate to the goal. The question isn't whether we can do something 100% safely - the question is whether we can minimize or mitigate the security compromises enough to make the goal still worth it, and how to do it.
When I point out that some problems are unsolvable for fundamental reasons, I'm not saying we should stop plugging LLMs into things. I'm saying we should stop wasting time looking for solutions to unsolvable problems, and focus on possible solutions/mitigations that can be applied elsewhere.