
The reason is that writing yourself is a critical thinking tool. It helps you work through the logic and arguments and has benefits well beyond just the content that gets put down. It's the journey, not the destination that matters!

Also, don't outsource your thinking to AI or the media (mainstream & social)



I'd give an extra reason. ChatGPT (both 3.5 and 4) seems to be doing an amazing job writing things for us, but that's strongly conditioned on everyone being able to write on their own.

When I let GPT-4 write text for me (parts of e-mails or documentation), I rely on my own writing skills to rate GPT-4's output. The problem I'm solving isn't "I can't write this". I can. It just requires putting in effort I'd rather avoid, and it's significantly easier to criticize, edit, or draw inspiration from existing text than to come up with your own from scratch.

People who decide to rely on LLMs instead of learning to write will face many problems, such as not being able to express what the LLM should write, and not being able to quickly evaluate if the output is any good.

Of course, if we keep the current rate of progress, a few years from now the world will be so different that it's hard to guess what communication will look like. Like, I can imagine GPT-6 or GPT-10 being able to write perfect text based on contextual awareness and me making a few grunts - but if this becomes ubiquitous, we'll have bots writing texts for other bots to translate back into grunts, and... it's really difficult to tell what happens then.


I'm actually not sure this is true. I think in general it is easier to evaluate a work of art, writing, etc. than it is to create that thing. I don't think we know yet whether kids who are allowed to use ChatGPT to their hearts' content will fail to learn how to write, or how to recognize good writing. My suspicion is that they will still be able to tell good writing from bad, and they may learn how to prompt a chatbot to tweak output to make it better.

I think what they will really lose is the perseverance that it takes to write, and various other skills that are made possible by being a skilled writer. They may gain other skills along the way, of course. But it's a big unknown.


Evaluating existing work is often MUCH harder than creation. It often 'feels' easier because most people do a terrible job and offer little more than their opinions. Some intuition: have you ever eaten something you really disliked but were able to tell that it's actually very good, because you understood that not all tastes match yours and you could evaluate the various techniques and elements that went into creating the dish? Perhaps a more relatable intuition: finding bugs and flaws in a large code base is much harder than writing the code to start with.


Okay, let me tweak my assertion a bit. I agree that evaluating is easier than creating - that is the basis of my assertion too. And I agree that in a general sense, you don't need to write to tell if what you're reading is any good, for the purpose of making you informed or entertaining you.

However, when you're using ChatGPT to write things for you, you are in a different position. First of all, you're not the audience - you're the editor. This will make your determinations different, as you're re-evaluating a work you're iterating on, and core points of which you're already familiar with. The same applies to writing on your own (where you're both the writer and the editor for yourself), but writing forces you to focus much more on the text and thoughts behind it, which I believe teaches you to make more nuanced determinations, with much greater fidelity.

Secondly, when you're writing - especially if you bother to enlist the help of an LLM - you usually have a specific goal in mind. Writing a love letter, or an advert, or a business e-mail, or a formal request, etc. requires you to think not just in terms of how it "feels" to you in a generic sense of entertainment or knowledge-transfer value. You need to put some actual thought into the tone, structure and wording. Writing, again, forces you to do that.

I can get away with letting GPT-4 write half the text in my correspondence at work (INB4: yes, I have temporary access to a company-approved model instance on Azure), because I've written such text myself so many times that I know exactly whether or not GPT-4 is doing a good job (I usually end up having it do two or three variants, and then manually mixing sentences or phrases from each). Even with that experience, I still consider it a tool for overcoming the mental blocks associated with context switching - I wouldn't dare have it both generate and send e-mails for me. As for people who never got the experience of writing specialized correspondence, I just don't think they'd be able to tell if the output of an LLM is actually suitable for the goal they want to achieve.


It feels like a limited P vs. NP problem -- I know how to verify in P, but I want the AI to try the NP solutions so I can check them in P.
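A minimal sketch of that verify-vs-search asymmetry, using subset-sum as a stand-in (function names here are made up for illustration): finding a subset that hits the target takes exponential time in the worst case, but checking a certificate someone else (the AI) proposes is a single linear pass.

```python
import itertools

def verify(nums, target, certificate):
    # Checking a proposed subset is cheap: one O(n) pass.
    # (Sketch: assumes the certificate's elements are distinct.)
    return (certificate is not None
            and all(x in nums for x in certificate)
            and sum(certificate) == target)

def search(nums, target):
    # Finding a subset is expensive: up to 2^n candidate subsets.
    for r in range(len(nums) + 1):
        for combo in itertools.combinations(nums, r):
            if sum(combo) == target:
                return list(combo)
    return None

nums = [3, 34, 4, 12, 5, 2]
solution = search(nums, 9)        # slow part: delegate to the machine
print(verify(nums, 9, solution))  # fast part: keep this for yourself -> True
```

The analogy in the comment above is that the human keeps the cheap `verify` step while handing the expensive `search` step to the model.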


I'm sure I've seen that exact scenario on SMBC, but I can't seem to find it now…


ChatGPT is arguably a better tool for thinking than writing on a text editor, though.


It certainly has its place, but there's also a temptation to press the button instead of thinking.

Seen a few "as a large language model" reviews on Amazon a few months back; now the search results are for T-shirts with that phrase printed on them, and I don't know if that's because people are buying them or because an unrelated bot generates new T-shirts every time it notices a new catchphrase.


Probably a person who doesn't think with ChatGPT won't be thinking through writing either? I don't think I'm thinking less with ChatGPT, and I don't think my 13-year-old is thinking less either. It's quite thought-demanding, actually…


What is thought demanding about it?

I feel like I spend more time trying to coax it into staying focused than anything else. Not where I want to spend my time and effort tbh


Evaluating the veracity and relevance of everything it says. Reflecting on what it’s given me and determining whether it meets my objectives. And then the topics I use it for are thought demanding!

If you are using it for marketing copy, that’s one thing. I’m using it to think through some very hard topics — and my kid is trying to learn how photosynthesis works atm.


As I understand it, these models respond at the same sort of level as the prompts: write like you're a kid and you get a simple reply; write like a PhD and you get a fancy one.

"Autocomplete on steroids" has always felt to me like it needlessly diminished how good it is, but in this aspect I'd say it's an important and relevant part of the behaviour.


The issue I am talking about is not about prompting, but a limitation of the models and algorithms below this layer. Prompting only exists because of the chat fine-tuning that happened at the later stages.


Reading and writing have served humanity well.

We can see the impact of outsourcing thinking in modernity, via the simplicity of likes and retweets.

While ChatGPT can be a helpful tool, the issue is that many will substitute rather than augment. It is a giant language-averaging machine, which will bring many people up but bring the other half down, though not entirely, because the upper echelons will know better than to parrot the parrot.

Summarizing a text will remove the nuance of meaning captured by the authors' words.

Generating words will make your writing blasé.

Do you think ChatGPT can converse like this?


One might entertain a contrary perspective on the issue of ChatGPT. Rather than being a monolithic linguistic equalizer, it could be seen as a tool, a canvas with a wide spectrum of applications. Sure, some may choose to use it as a crutch, diluting their creativity, yet others might harness it as a springboard, leveraging it to explore new ideas and articulate them in ways they might not have otherwise.

Consequently, the notion that ChatGPT could 'bring down' the more skilled among us may warrant further scrutiny. Isn't it possible that the 'upper echelons' might find novel ways to employ this tool, enhancing rather than undermining their capabilities?

Similarly, while summarization can be a blunt instrument, stripping away nuance, it can also be a scalpel, cutting through verbosity to deliver clear, concise communication. What if ChatGPT could serve as a tutor, teaching us the art of brevity?

The generated words may risk becoming 'blasé', as you eloquently put it, but again, isn't it contingent on how it's used? Can we not find ways to ensure our individual voice still shines through?

So, while I understand and respect your concerns, I posit that our apprehensions should not eclipse the potential that tools like ChatGPT offer us. It might not just be a 'parrot' – but a catalyst for the evolution of human communication.

Though I'm hoping you didn't suspect it, I should warn you this comment was written by you know what (who?).


Ironically, this comment is better written than nearly all others under this post. I take LLMs to be net positive contributors to literary expression.


What is better about the writing? What about the argumentation?


Did AI augment your thinking on the matter or did it do the thinking for you?


You turned up the “smart” knob too high, clocked it at sentence 3, but a hearty +1 from me


I enjoyed reading. May I ask what you used as a prompt?


This is word soup just for the sake of using lots of fancy words. Be more concise, ChatGPT… Bard is often better here.

If ChatGPT did write this, as you allude to, then you didn't check your work. These counterarguments are distracted and at times irrelevant…

> Rather than being a monolithic linguistic equalizer

This has very different meaning than "language averager", from words to model (during training), vs linguistic equalizer, model to words (after training)

> it could be seen as a tool, a canvas with a wide spectrum of applications.

Yes, ofc, but we are talking about writing specifically; this is trending towards whataboutism.

> Sure, some may choose to use it as a crutch, diluting their creativity, yet others might harness it as a springboard, leveraging it to explore new ideas and articulate them in ways they might not have otherwise.

This is the point, not contrary to what has been said. The issue is with the crutch users. We know many people do this, yet this topic is barely mentioned, let alone addressed as the core of the discussion.

> ... the notion that ChatGPT could 'bring down' the more skilled among us ... Isn't it possible that the 'upper echelons' might find novel ways...

That is what I said

> but again, isn't it contingent on how it's used?

Again, this is what I said; failing to reflect that shows how limited this reply is in debate and argumentation.

> What if ChatGPT could serve as a tutor, teaching us the art of brevity?

More whataboutism, irrelevant to the more focused discussion at hand

> So, while I understand and respect your concerns, I posit that our apprehensions should not eclipse the potential that tools like ChatGPT offer us. It might not just be a 'parrot' – but a catalyst for the evolution of human communication.

Sam might disagree here... though I do not completely. Why did it switch to "our" all of the sudden?

Not sure where I said it, but I have put forth the idea that it could, _could_, improve communication for many, by acting as a filter or editor. Again, the issue at hand is that _many_ will not use it as a tool but as a replacement; there are many lazy people who do not want to write and will subsequently not critically read the output…

---

>> the issue is that many will substitute rather than augment

This is the core statement of my argument; "many" has been interpreted as something total rather than partial. That it is lost within the reply is not surprising… a distracted and largely irrelevant word soup.

In summary, this is the low-quality writing one might come to expect from ChatGPT's primary output, assuming the allusion is correctly interpreted… be clear if you use it.

And sibling comments show that lack of critical reading, along with the fawning over anything ChatGPT; whether it was AI-written or not, people are assuming so based on your last ambiguous sentence.



