
I used it for writing assistance and plot development. Specifically, a novel re: the conquest of Mexico in the 16th century. It was great at spitting out ideas re: action scenes and even character development. In the past month or so, it has become so cluttered with caveats and tripe regarding the political aspects of the conquest that it is useless. I can’t replicate the work I was doing before. Actually cancelled my $20 subscription for GPT-4. Pity. It had such great promise for scripts and plots, but something changed.


I fear we have to wait for a non-woke (so probably non-US) entity to train a useful GPT-4(+) level model. Maybe one from Tencent or Baidu could be good, provided you avoid very specific topics like Taiwan or Xi.


Then we can use LLM benchmarks to benchmark freedom of speech.


writing "assistance", lol


It is a useful tool for editing. You can input a rough scene you’ve written and ask it to spruce it up, correct the grammatical errors, toss in some descriptive stuff suitable for the location, etc. It is worthwhile. At least it was…

If your text isn’t ‘aligned’ correctly, it either won’t comply or spews out endless caveats.

I appreciate the motivation to rein in some of the silly 4chan stuff that was occurring as the limits of the tech were tested (namely, trying to get the thing to produce anti-Semitic screeds or racist stuff). But whatever ‘safeguards’ have been implemented have extended so far that it has difficulty countenancing a character making a critical comment about Aztec human sacrifice or cannibalism.

I suspect that these unintended consequences, while probably more evident in literary stuff, may be subtly affecting other areas, such as programming. Definitely a catch-22. Doesn’t really matter, though, as all this fretting about ‘alignment’ and ‘safeguards’ will be moot eventually, as the LLM weights are leaked or consumer tech becomes sufficient to train your own.


Sprucing up, fixing your mistakes, adding in "descriptive stuff"... that's like 90% of writing. Outsourcing it all to AI essentially robs the purchaser of the effort required to create an original piece of work. Not to mention copyright issues: where do you think the AI is getting those descriptive phrases from? Other authors' work.


I think that you, like I did in the past, are underestimating the number of people who simply hate writing and see it as a painstaking chore that they would happily outsource to a machine. It doesn't help that most people grow up being forced to write when they have no interest in doing so, and to write things they have no interest in writing, like school essays and business applications and so on. If a chatbot could automate this... actually, even if a chatbot can't automate this, people will still use it anyway, just to end the pain.


The original post specifically stated he was utilizing AI for plot development of a novel. Not a school essay or business application.


I should have bookmarked it but there was an article shared on HN, published in the New Yorker I believe, or the LARB, or some such, where a professional writer was praising some language model as a tool for people who hate writing, like themself and other professional writers. I was dumbstruck.

But, it's true. Even people who want to write, even write literature, can hate the act of actually, you know, writing.

In part, I'm trying to convince myself because I still find it hard to believe but it seems to be the case.





