
I’ve thought about this and tested it a lot; it doesn’t work.

The reason is that the models don’t understand content or even context; they just recognize patterns and generate similar patterns, which we then interpret.

Case in point: a photo of an astronaut on a horse is not actually an astronaut on a horse; it’s a 2D pixel map of light that our eyes then interpret to mean an astronaut on a horse. This is even easier to grasp when listening to generated singing. It sounds just like singing, but it isn’t language at all and doesn’t mean anything; it’s just a very similar pattern.

These AI models are great at producing things that look like a pattern that means something to us, but they don’t generate semantically meaningful content themselves.

So when you try this with GPT-3 or other models on long-form narrative, it falls apart pretty fast: the model can’t keep straight things like characters, their personalities and motives, or the overall plot arc.

But! You can definitely feed in prompts and ask questions to get ideas and boilerplate (descriptions of people and places come out especially well), then edit the output and use it to accelerate your writing process.


