
Indeed. The unspoken requirement behind (too) smart interpreters is 'I don't want to spend time segregating permissions and want a do-anything machine.'

Since time immemorial, that turns out to be a very bad idea.

It was with computing hardware. With OSs. With networks. With the web. With the cloud. And now with LLMs.

>> (from parent) Sometimes [routing different data to agents with more narrowly defined scopes and access rights] will work, but then it will work by relying on a sufficiently primitive interpreter to separate the data streams before it reaches the smart ones.

This is and always will be the solution.

If you have security-critical actions, then you must minimize the attack surface against them. This inherently means (a) identifying the security-critical actions, (b) limiting functionality involving them to well-defined micro-actions with specific, narrowly-scoped authorizations, and (c) solving the UX challenges around requesting those authorizations.
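
A minimal sketch of what (a)-(c) can look like in code. All names here are hypothetical, not taken from any real framework:

    # Hypothetical sketch: each security-critical action is a narrow,
    # well-defined micro-action gated by an explicit, specific grant.
    from dataclasses import dataclass

    @dataclass(frozen=True)
    class Grant:
        action: str  # (a) a named security-critical action, e.g. "payments.refund"
        scope: str   # (b) a specific scope, e.g. "order:12345"

    def refund_order(order_id: str, grants: frozenset[Grant]) -> None:
        needed = Grant("payments.refund", f"order:{order_id}")
        if needed not in grants:
            # (c) the UX layer would turn this into a specific, human-readable
            # authorization request, rather than a blanket "allow everything".
            raise PermissionError(f"needs {needed.action} on {needed.scope}")
        ...  # perform the one narrowly-defined action, and nothing else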

The peril of LLM-on-LLM as a solution to this is that it's the security equivalent of a Rorschach inkblot: dev teams stare at it long enough and convince themselves they see the guarantees they want.

But they're hallucinating.

As was quipped elsewhere in this discussion, there is no 99% secure for known vulnerabilities. If something is 1% insecure, that 1% can (and will) be targeted by 100% of attacks.



> 'I don't want to spend time segregating permissions and want a do-anything machine.'

Yes. It's a valid goal, and we'll keep pursuing it because it's a valid goal. There is no universal solution to this, but there are solutions for specific conditions.

> Since time immemorial, that turns out to be a very bad idea.

> It was with computing hardware. With OSs. With networks. With the web. With the cloud. And now with LLMs.

Nah. This way of thinking is the security people's variant of "only do things that scale", and it's what leads to hare-brained ideas like "let's replace laws and banking with smart contracts because you can't rely on trust at scale".

Not every system needs to be secure against everything. Systems that are fundamentally insecure in some scenarios are perfectly fine, as long as they're not exposed to those problem scenarios. That's how things work in the real world.

> If you have security-critical actions, then you must minimize the attack surface against them.

Now that's a better take. Minimize, not throw in the towel because the attack surface exists.


> Not every system needs to be secure against everything. Systems that are fundamentally insecure in some scenarios are perfectly fine, as long as they're not exposed to those problem scenarios.

That's a vanishingly rare situation, and one I'm surprised to see you arguing for, given your other comments about the futility of enforcing invariants on reality. ;)

If something does meaningful and valuable work, that almost always means it's also valuable to exploit.

We can agree that if you're talking resource-commitment risk (i.e. the attacker must spend this much to exploit), there are insecure systems that are effective to implement, because the cost of exploitation exceeds the benefit. (Though a warning: technological progress keeps lowering that cost.)

But fundamentally insecure systems are rare in practice for a reason.


And fundamentally insecure systems sooner or later get connected to things that should be secure, and then become stepping stones in an exploit. These are lessons that should have been learned by now.


> Indeed. The unspoken requirement behind (too) smart interpreters is 'I don't want to spend time segregating permissions and want a do-anything machine.'

> Since time immemorial, that turns out to be a very bad idea.

Sometimes you can't, or it costs more to do it than it costs to accept the risk or insure against the possible bad outcomes.

Mitigating every risk is bad risk management.
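
To put toy numbers on it: if a failure mode has a 1% annual chance of costing $100k, the expected loss is $1k/year; spending $20k/year to mitigate it is strictly worse than simply accepting the risk or insuring against it.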

But we can presumably agree that you shouldn't blindly go into this. If you choose to accept those risks, it needs to be a conscious choice - a result of actually understanding that the risk is there, and the possible repercussions.

> This is and always will be the solution.

It's the solution when it doesn't prevent meeting the goal.

Sometimes accepting risks is the correct risk management strategy.

Risk management is never just mitigation - it is figuring out the correct tradeoff among accepting, mitigating, transferring, and insuring against the risk.


>>> [you] Sometimes [routing different data to agents with more narrowly defined scopes and access rights] will work, but then it will work by relying on a sufficiently primitive interpreter to separate the data streams before it reaches the smart ones.

>> [me] This is and always will be the solution.

> [you] It's the solution when it doesn't prevent meeting the goal.

I may have over-buried the antecedent there.

The point being that clamping the possibility space of input fields upstream of an LLM, via more primitive and deterministic evaluation, is an effective way to also clamp LLM behavior/outputs.
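
As a hedged sketch of that idea (function names are hypothetical): a dumb, deterministic gate upstream guarantees the smart interpreter only ever sees inputs drawn from a small, known-safe set, regardless of how clever the payload is.

    import re

    ORDER_ID = re.compile(r"[0-9]{6}")  # primitive, deterministic gate

    def llm_summarize_order(order_id: str) -> str:
        # Hypothetical stand-in for whatever smart-but-injectable
        # processing happens downstream.
        return f"summary of order {order_id}"

    def handle(user_field: str) -> str:
        # Clamp the possibility space *before* anything smart sees the input.
        if not ORDER_ID.fullmatch(user_field):
            return "rejected"
        # The LLM can now only ever receive a six-digit ID through this
        # field, so free-text injection payloads never reach it.
        return llm_summarize_order(user_field)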



