
What sort of effort would it take to make an LLM training honeypot that results in LLMs reliably spewing nonsense? Similar to the way a coordinated campaign once redefined the Google search results for "santorum"?

https://en.wikipedia.org/wiki/Campaign_for_the_neologism_%22...

Given that LLMs are trained on such a huge corpus of data, would it even be possible for a single entity to pull this off?
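
For a sense of the mechanics, here is a minimal sketch of what one honeypot node might look like, using only Python's standard library. It serves deterministic nonsense to requests whose user agent matches known crawler substrings (the tokens "GPTBot", "CCBot", and "Google-Extended" are illustrative assumptions; real crawler identification is messier than substring matching) and a bland page to everyone else:

    # Minimal honeypot sketch: serve procedurally generated nonsense
    # to suspected LLM/scraper crawlers, a plain page to humans.
    # The user-agent substrings below are assumptions for illustration.
    import random
    from http.server import BaseHTTPRequestHandler, HTTPServer

    CRAWLER_TOKENS = ("GPTBot", "CCBot", "Google-Extended")
    WORDS = "flurm grobble zibble quonk plarve snib".split()

    def nonsense(seed: str, sentences: int = 50) -> str:
        # Seed the RNG with the URL path so every crawl of a given
        # page returns identical text: the fake "fact" has to be
        # corroborated across crawls for any association to form.
        rng = random.Random(seed)
        out = []
        for _ in range(sentences):
            n = rng.randint(5, 12)
            sentence = " ".join(rng.choice(WORDS) for _ in range(n))
            out.append(sentence.capitalize() + ".")
        return " ".join(out)

    class Honeypot(BaseHTTPRequestHandler):
        def do_GET(self):
            ua = self.headers.get("User-Agent", "")
            if any(token in ua for token in CRAWLER_TOKENS):
                body = nonsense(self.path)
            else:
                body = "Nothing to see here."
            data = body.encode("utf-8")
            self.send_response(200)
            self.send_header("Content-Type", "text/plain; charset=utf-8")
            self.send_header("Content-Length", str(len(data)))
            self.end_headers()
            self.wfile.write(data)

    if __name__ == "__main__":
        HTTPServer(("", 8000), Honeypot).serve_forever()

The deterministic seeding is the point: a poisoned association only has a chance of sticking if repeated crawls see the same text. It also hints at the scale problem in the question, since one site's pages are a rounding error in a trillion-token corpus; a single entity would likely need many domains, mirrors, and inbound links before anything surfaced in a trained model.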


