
It’s not bad for summarizing or translating.

I like to categorize AI outputs by the information size of the prompt + context input versus the information size of the output.

Summaries: output < input. It’s pretty good at this for most low-to-medium stakes tasks.

Translation: output ≈ input, but in a different format or language. It’s decent at this, though it requires more checking.

Generative expansion: output > input. This is where the danger is. Like asking for a cheeseburger and it infers a sesame seed bun, because that matches its model of a cheeseburger. Generally that’s fine. Unless you’re deathly allergic to sesame seeds. Then it’s a big problem. So you have to be careful in these cases. And, at best, anything inferred beyond the input is average by definition. Hence AI slop.
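The framing above could be sketched as a crude heuristic. This is purely illustrative: the function name, the use of token counts as a proxy for "information size", and the tolerance threshold are all assumptions of mine, not something a model exposes.

```python
# Sketch: bucket a model call by comparing rough "information size" of
# input vs output, using token counts as a crude proxy.

def classify_task(input_tokens: int, output_tokens: int, tol: float = 0.2) -> str:
    """Return a rough category based on the output/input size ratio."""
    ratio = output_tokens / max(input_tokens, 1)
    if ratio < 1 - tol:
        return "summary"             # output < input: usually safe
    if ratio <= 1 + tol:
        return "translation"         # output ~= input: needs checking
    return "generative expansion"    # output > input: model infers details

# A 1000-token article condensed to 150 tokens is a summary:
print(classify_task(1000, 150))   # → summary
print(classify_task(500, 520))    # → translation
print(classify_task(50, 800))     # → generative expansion
```

The third bucket is the one to watch: the larger the ratio, the more of the output is inferred rather than supplied, and every inferred detail is a guess at the statistical average.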
