Hacker News

Am I the only one who thinks this is a really bad idea?

By offloading your cognitive tasks to an AI, even though you now look smarter, you're becoming dumber in the long run, because you're never really challenging and exercising your intellect. This book[1] goes into a lot of detail about how rote memorization and recall are essential to critical thinking (you have a limited working memory, and the way you're able to critically think about complex subjects is by chunking, which only works with concepts you've previously memorized). If you just stop exercising your recall and critical thinking, they'll get weaker and weaker.

I feel that already with ChatGPT. Before, whenever I needed to learn some programming concept, I'd have to search through vast amounts of resources to learn it. By being exposed to many different points of view, I always felt that what I had learned stuck with me for much longer. If I just ask ChatGPT, I get the answer faster, but I also forget faster. It's not learning.

Learning, with a capital L, is not supposed to be easy. It's supposed to be hard. Education is about making what is hard a worthwhile pursuit. The people who get lured into thinking they'll be smarter if they plug themselves into the matrix will be shooting themselves in the foot.

For me, relying on OpenAI to function cognitively is like relying on Google to turn my lightbulb on. It looks cool, but it doesn't make any sense.

[1] https://www.goodreads.com/book/show/4959061-why-don-t-studen...



I haven't formed a definite opinion on this, and I think I largely agree, but what about a counterargument like this one:

Virtually no one does long division manually anymore, or really any basic arithmetic greater than two digits, because we invented pocket calculators and smartphones that do this for us. And are we any worse mathematicians or engineers because of this? If anything, this has freed us to perform more higher-order reasoning.

And so with these kinds of "AI" assistants, is it possible that the types of reasoning that we offload onto them will free us to reason in even higher orders?


Well, we're pretty confident that calculators work, and do so in a fairly deterministic manner.

ChatGPT tends to be extremely incoherent and often provides answers that directly contradict what it previously said (at least on some topics). My fear is that while you're right in theory, we'll have to spend huge amounts of brain power and time discerning whether what it's saying is total BS or not. And I really don't know how I could even do that if I weren't particularly knowledgeable on the topic.

If it could provide citations or some context on why it decided to answer the way it did, it might not be so bad.

Fairly straightforward areas like software engineering are not that bad, I guess, but its answers to even mildly complex questions on history, anthropology, or related fields, where there's often no clear and straightforward answer, just seem absolutely awful. Just tweaking the input a bit, without actually changing the core of the question, can result in something that completely contradicts what it just said before.


Socrates didn't want to write anything down because he thought it would make you stupid too.

Though maybe he might be right as well...


If you think of cognitive tasks as a hierarchy it makes more sense. It's a big task to think through and plan an essay, but it's a little cognitive task to check your grammar and citations. If you can get an LLM to do the little things, you can practice the higher level stuff.

I guess the question is whether you are actually learning the higher level stuff by getting help with the lower level stuff. I think on some level you would, like how having a calculator when you're doing higher level math helps you think about the problem rather than the details.


I recently asked an AI for help writing a python program with a mutex to prevent outdated information from being accessed. It presented me with a solution that I didn't fully understand, so I asked it to explain that part of the code. And then I asked it to explain part of the explanation. It just kept answering, never getting tired or irritated with me asking for clarification, and generating information that couldn't exist in a book or a blog post. It reminded me of the primer in "The Diamond Age". It catered the answers to my needs and deficiencies instead of making me adapt to it.
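A minimal sketch of the kind of program described above, assuming the standard `threading` module; the class name, the staleness check, and the timeout value are illustrative, not details from the actual conversation:

```python
import threading
import time

class FreshValue:
    """Holds a value behind a mutex so readers never see it half-updated or stale."""

    def __init__(self, value):
        self._lock = threading.Lock()
        self._value = value
        self._updated_at = time.monotonic()

    def update(self, value):
        # Writers take the lock so the value and its timestamp change together.
        with self._lock:
            self._value = value
            self._updated_at = time.monotonic()

    def read(self, max_age=1.0):
        # Readers take the same lock and refuse data older than max_age seconds.
        with self._lock:
            if time.monotonic() - self._updated_at > max_age:
                raise RuntimeError("stale data")
            return self._value

holder = FreshValue(0)
holder.update(42)
print(holder.read())  # → 42
```

Holding one lock around both the value and its timestamp is the key point: without it, a reader could see a new value paired with an old timestamp, or vice versa.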


> information that couldn't exist in a book or a blog post

I think that exists somewhere. Maybe not something specific to my situation, but there is enough information out there to lead you easily to the solution.

That’s why I prefer research over this AI-assisted workflow. Instead of just giving me the specific knowledge I need, it shows me what I didn’t know and gives me much more information to reason about.


I think your point about research is valid, but once that is done and you have a rough understanding of a solution, you just want things to work. You don't want a bunch of ways to do one thing; you want a single opinionated solution. Part of the information encoded in these models comes from trainers deciding which answer is best, and that is information in itself which isn't necessarily online.


Interesting. This isn't the only life-changing invention: riding horseback was one of them. Cars, then GPS. The internet instead of books. Social sites instead of in-person skills. And so on.

The main danger of AI 'experts' is, in my opinion, the information bubble they create. They don't just suggest; eventually they will drive your thoughts, in the direction their builders want. That will be crowd control even worse than Facebook at its peak. Currently we know that ChatGPT has woke bias embedded. But that's not the end, right?


It has already happened with people's sense of direction. Lots of people have no idea where they are when they are driving due to being completely dependent on GPS. I see so much lack of situational and spatial awareness on the road. Since people don't know where they are and where they are supposed to go, there's a lot of last moment lane switches to make an exit etc. Lots of very bad decision making because of being continuously lost.


You can still seek challenge, it just has to be more ambitious and further out at the edges now.

A challenge closer to leading a team of researchers rather than plugging along alone at a problem.


I agree with you. As a more extreme example, I wish I had never offloaded my address book to my phone; I can hardly remember a single phone number now!


Would you have remembered every phone number if it were in a Rolodex?

It’s a skill you have to exercise no matter where you keep contact information. Same deal with outsourcing directions to GPS-using maps apps: you can still maintain a basic sense of direction and how to navigate a city without an app as long as you make it a point to do so.


Counterpoint/related: LLMs penalize those who spent most of their education memorizing things/doing rote learning and encourage actual thinking.


Honestly when I read comments about how LLMs will make us all dumb, all I can think of is Steve Jobs telling people they're holding it wrong.

There hasn't been a subject that's gotten me thinking as deeply as LLMs in a minute, and I don't even work at the implementation level.

Just coming up with novel ways to use them is a delightful brain exercise that requires ways of thinking you don't normally exercise just writing code. And since I started interacting with ChatGPT primarily through APIs rather than the web interface, I've started scratching a mental itch that "normal" programming had long since stopped scratching.
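For what it's worth, a minimal sketch of what "through APIs rather than the web interface" can look like: assembling a chat-completions request body by hand. The model name, messages, and endpoint mentioned in the comments are illustrative assumptions, not details from the comment:

```python
import json

# Hypothetical request body for a chat-completions-style HTTP endpoint.
# Model name and message contents are placeholders.
payload = {
    "model": "gpt-3.5-turbo",
    "messages": [
        {"role": "system", "content": "You are a terse coding assistant."},
        {"role": "user", "content": "What does a mutex protect?"},
    ],
    "temperature": 0.2,
}

# Serialized, this is what you would POST (with an Authorization: Bearer
# header carrying your API key) to the provider's completions endpoint.
body = json.dumps(payload)
```

Working at this level means you control the system prompt, the temperature, and the conversation history yourself, which is exactly the kind of tinkering the web interface hides.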


> encourage actual thinking

You mean when there is some chance that the answer it provided is BS, but in a subtle, not immediately obvious way, and you have to spend some amount of time 'actually thinking' before you figure it out?


Not a copilot, an auto pilot. Most people will not be able to resist the temptation.


It's much like relying on Google Maps for directions. When you want to practice your navigation skills, you're free to not use it.

How much practice you need depends on what you're interested in learning to do.




