
> I wonder, is it because the model is trying to produce the most song-ish song that makes it mediocre?

I think there’s a simpler explanation. Good creative work needs an element of intelligently targeted unpredictability. It’s perhaps not the best idea to ask a prediction model for that.

I don’t get how it took people so long to figure this out. In the first week, I was trying to get it to write a story in the style of Dostoyevsky, and it just couldn’t. I’d ask it to be more wavering, to break the rules, but it gave a shallow interpretation of that too. Every story got wrapped up in a “then everyone became friends and was happy” style.

However, when writing a corporate HR letter threatening employees not to discuss the CEO’s inappropriate behavior at the holiday party, as well as an inspirational LinkedIn post, it was indistinguishable from the real deal on the first try.



Not sure if you tried this out on any other model, but I find that a lot of the tonal issues are pretty specific to ChatGPT. An untuned davinci-003 doesn't feel the same need to wrap things up happily every time, and produces much better imitations of style. There are still the usual LLM issues (slow divergence from whatever the prompt was, occasional loops, a sort of dreamy quality to every narrative), but ChatGPT always writes in the bland style of a corporate memo because it was deliberately retrained to do that.
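
For reference, a minimal sketch of how one might compare the two through the API, assuming the legacy openai Python SDK (pre-1.0), the now-deprecated text-davinci-003 completion model, and gpt-3.5-turbo as a stand-in for the ChatGPT model; the completions endpoint just continues the raw prompt in-style, while the chat endpoint answers as the RLHF-tuned assistant:

    # Assumptions: legacy openai SDK (< 1.0), OPENAI_API_KEY set in the environment,
    # text-davinci-003 still available (it has since been deprecated).
    import openai

    prompt = (
        "Write the opening of a short story in the style of Dostoyevsky, "
        "narrated by a feverish, self-contradicting clerk:\n\n"
    )

    # Completion model: plain continuation of the prompt, no chat persona.
    completion = openai.Completion.create(
        model="text-davinci-003",
        prompt=prompt,
        max_tokens=300,
        temperature=1.0,  # higher temperature = more of the unpredictability discussed above
    )
    print(completion.choices[0].text)

    # Chat model: same request through the chat interface, where the RLHF
    # tuning tends to pull the tone back toward a tidy, upbeat register.
    chat = openai.ChatCompletion.create(
        model="gpt-3.5-turbo",
        messages=[{"role": "user", "content": prompt}],
        max_tokens=300,
        temperature=1.0,
    )
    print(chat.choices[0].message.content)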



