Hacker News
new
|
past
|
comments
|
ask
|
show
|
jobs
|
submit
login
rkangel
on May 15, 2023
|
parent
|
context
|
favorite
| on:
The Dual LLM pattern for building AI assistants th...
In this model though, the person who can check that prompt injection was being resisted is the user using it, who
wants
that resistance.
Guidelines
|
FAQ
|
Lists
|
API
|
Security
|
Legal
|
Apply to YC
|
Contact
Search: