Hacker Newsnew | past | comments | ask | show | jobs | submitlogin

> Especially if you do meaningless computations in between to mask it

I think this will do the trick against coding agents. LLMs already struggle to remember the top of long prompts, let alone if the malicious code is spread out over a large document or even several. LLM code obfuscation.

- Put the magic array in one file.

- The make the conversion to utf8 in a 2nd location.

- Move the data between a few variables with different names to make it loose track.

- Make the final request in a 3rd location.



Consider applying for YC's Winter 2026 batch! Applications are open till Nov 10

Guidelines | FAQ | Lists | API | Security | Legal | Apply to YC | Contact

Search: