> Indeed. The unspoken requirement behind (too) smart interpreters is 'I don't want to spend time segregating permissions and want a do-anything machine.'

> Since time immemorial, that turns out to be a very bad idea.

Sometimes you can't, or segregating permissions costs more than it would cost to accept the risk or insure against the possible bad outcomes.

Mitigating every risk is bad risk management.

But we can presumably agree that you shouldn't go into this blindly. If you choose to accept those risks, it needs to be a conscious choice - the result of actually understanding that the risk exists and what the possible repercussions are.

> This is and always will be the solution.

It's the solution when it doesn't prevent meeting the goal.

Sometimes accepting risks is the correct risk management strategy.

Risk management is never just mitigation - it is figuring out the right tradeoff among accepting, mitigating, transferring, and insuring against the risk.



>>> [you] Sometimes [routing different data to agents with more narrowly defined scopes and access rights] will work, but then it will work by relying on a sufficiently primitive interpreter to separate the data streams before they reach the smart ones.

>> [me] This is and always will be the solution.

> [you] It's the solution when it doesn't prevent meeting the goal.

I may have over-buried the antecedent there.

The point is that clamping the possibility space of input fields upstream of an LLM, via more primitive and deterministic evaluation, is an effective way to clamp the LLM's behavior and outputs as well.
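
To make that concrete, here is a minimal Python sketch of the idea: a primitive, deterministic whitelist gate that runs before anything reaches the model. Everything here (the field names, the patterns, the call_llm stand-in) is hypothetical illustration, not any particular system's API:

    import re

    # Deterministic whitelist: each field must match a narrow, known grammar.
    FIELD_PATTERNS = {
        "ticket_id": re.compile(r"^[A-Z]{2,5}-\d{1,6}$"),
        "status": re.compile(r"^(open|closed|pending)$"),
    }

    def clamp_fields(fields: dict[str, str]) -> dict[str, str]:
        # Reject anything outside the whitelisted grammar, so the LLM
        # only ever sees values drawn from a known, narrow space.
        for name, value in fields.items():
            pattern = FIELD_PATTERNS.get(name)
            if pattern is None or not pattern.fullmatch(value):
                raise ValueError(f"rejected field {name!r}: {value!r}")
        return fields

    def call_llm(prompt: str) -> str:
        # Stand-in for whatever model client you actually use.
        return f"(model output for: {prompt})"

    def summarize(fields: dict[str, str]) -> str:
        safe = clamp_fields(fields)  # the dumb, deterministic gate runs first
        prompt = f"Summarize ticket {safe['ticket_id']} (status: {safe['status']})."
        return call_llm(prompt)

The dumb gate can't be talked out of its rules: a field carrying an injection payload simply fails the regex and never reaches the interpreter that could act on it.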



