
The reason humans do this and the reason LLMs do it are very different. Humans do it as a social skill / tribe-fitting behaviour. Agreeableness. (Watch out for that!)


> The reason humans do this and the reason LLMs do it are very different. Humans do it as a social skill / tribe-fitting behaviour.

Tribe-fitting doesn't sound as far from minimizing a loss function as you imply.


I want you to find me a human who (a) has defined a loss function for social interaction and (b) consciously performs the statistical analysis involved in fitting that loss function in social settings.

LLMs do not have cognitive processes. They do not think. They do not choose to obey the requirements of a loss function; it is simply how they work, like any machine. Humans do not work this way, and the difference is fundamental.


> I want you to find me a human who (a) has defined a loss function for social interaction and (b) consciously performs the statistical analysis involved in fitting that loss function in social settings.

Why would they have to do that consciously? Do you think LLMs do this "consciously", if that term even applies? I wouldn't think so. The loss function applies during training, and that training ultimately defines the weights which guide their "thinking process".

Analogously, our experiences in childhood shape the neural "weights" that guide our thinking processes in social situations in adulthood.
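To make that concrete, here is a minimal sketch of a single training step, the only place a loss function ever "applies". The toy model, the made-up token IDs, and PyTorch itself are just illustrative assumptions, not how any particular LLM is actually trained, but the shape is the same: the gradient of the loss nudges the weights, and the model never "decides" anything.

    import torch
    import torch.nn as nn

    # Toy "language model": token embeddings followed by a linear layer
    # that scores every token in a tiny vocabulary as the possible next token.
    vocab_size = 100
    model = nn.Sequential(nn.Embedding(vocab_size, 32), nn.Linear(32, vocab_size))
    optimizer = torch.optim.SGD(model.parameters(), lr=0.1)

    # Made-up training example: for each token, the target is the next token.
    tokens = torch.tensor([5, 42, 7, 99])
    inputs, targets = tokens[:-1], tokens[1:]

    logits = model(inputs)                                # (3, vocab_size) scores
    loss = nn.functional.cross_entropy(logits, targets)   # the loss function
    loss.backward()                                       # gradients w.r.t. weights
    optimizer.step()                                      # weights move to reduce loss

At inference time the loss is gone; only the weights it left behind remain.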

> LLMs do not have cognitive processes. They do not think.

You don't know what thinking is mechanistically, so you can't make this claim. I don't know why people keep pretending we have knowledge that we do not in fact have.


> Why would they have to do that consciously? Do you think LLMs do this "consciously", if that term even applies?

Because the only thing an LLM does is apply statistics to inputs to generate outputs. That's literally it. It's just a pile of statistics.

Drawing an analogy to humans requires that humans are similarly statistical. LLMs do not have cognitive processes, while humans do, so the analogy obviously requires some kind of leap between the two if it is to have any merit whatsoever. That is exactly what my request gets at: if LLMs are "not so different" from humans, and if LLMs work only on statistics, then humans must also work only on statistics. I want to see evidence of this.
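To spell out what I mean by "apply statistics": at inference time the model turns the input into scores over possible next tokens, normalizes them into probabilities, and samples one. A schematic sketch, with dummy numbers and a four-token vocabulary standing in for a real model:

    import torch

    # Pretend these are the model's output scores (logits) for the next token
    # given some input; a real model produces one score per vocabulary entry.
    logits = torch.tensor([2.0, 0.5, -1.0, 0.1])          # dummy numbers
    temperature = 0.8

    probs = torch.softmax(logits / temperature, dim=-1)   # scores -> probabilities
    next_token = torch.multinomial(probs, num_samples=1)  # sample one token id
    print(probs, next_token.item())

That is the whole "generate" step, repeated once per token.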

> You don't know what thinking is mechanistically, so you can't make this claim. I don't know why people keep pretending we have knowledge that we do not in fact have.

Are you suggesting the code behind the LLMs is just, like, ineffable or something? And therefore we can't know how they work, so we get to just make up wild claims about their capabilities, and then smarmily position ourselves as some kind of authority on the matter when other people talk about how they work?

No, I don't think so. We do know how LLMs work, and the way they work is not thinking. They have no agency. Claims to the contrary are, frankly, absolutely absurd. They're just statistics. There's nothing magical about them. You should stop pretending there is.


> To draw an analogy to humans requires that humans are similarly statistical.

Prove they are not.

> Are you suggesting the code behind the LLMs is just, like, ineffable or something?

No, the argument is quite simple. We understand the mathematics of transformers and LLMs, therefore they seem obvious and not at all magical.

By contrast, we do not understand the mathematics behind human cognition, so it seems complex and mysterious, and we have only non-rigorous folk concepts like "thoughts" and "feelings" to describe mental phenomena. As a result, you cannot intuitively fathom how to bridge the gap between mathematics and your folk concepts, so any comparison seems absurd; but note that the absurdity is purely a product of our ignorance of the mathematics of mind. This is a classic god-of-the-gaps fallacy.

Here's how you can logically bridge that gap from a different direction: per the Bekenstein bound, any finite region with finite energy contains finite information; a human is such a region, therefore a human contains finite information; any system with finite information can be described by a finite state machine; therefore a human can be described by a finite state machine, which is a mathematical model.
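For concreteness, here is the bound itself, with R the radius of a sphere enclosing the system and E its total energy; the numbers below are rough, back-of-the-envelope values meant only to show the quantity is finite:

    I \le \frac{2 \pi R E}{\hbar c \ln 2} \ \text{bits}

    % Assumed illustrative values: R ~ 0.07 m and E = m c^2 ~ 1.3e17 J for a
    % ~1.5 kg brain give I on the order of 10^42 bits: enormous, but finite.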

Therefore whatever a "thought" or "feeling" is, it will correspond to some mathematical object. Exactly what kind of mathematical object that is remains unknown.

However, transformers learn to reproduce a function by learning to map its inputs to its outputs. Here the function being approximated is the human brain's function for producing intelligible human text. Therefore, LLMs are at least partially learning the human brain's function for producing intelligible text.

Whether this requires "thoughts", and therefore whether LLMs fully or partly reproduce what we refer to as "thoughts", is not yet clear. What is clear is that we have no basis to claim they have no thoughts, because we don't really know what thoughts are.


> Prove they are not.

I already asked for a proof that they are. Asking for a negative proof in response demonstrates either an inability to justify the claim or a lack of desire to engage in good faith.

This stuff is so tiring. Y'all are really bent on misrepresenting things to convince people that LLMs either are capable of thought or else put us, like, only a couple steps away from programs that will be capable of thought. The AI Singularity is nigh!

But it's all bullshit. It's just pseudoscientific postulation and sufficiently obfuscated leaps in logic — and it works to dupe layfolk who don't know any better. It's irresponsible, reprehensible, and immoral. Go find someone else to sell your philosophical snake oil to.


> I already asked for a proof that they are.

Whoever is making a positive claim has the burden of proof. I'm not making a positive claim, therefore I need present no proof. You are claiming LLMs are not like the human brain, therefore you must show the proof.

All I've done so far is present evidence and arguments demonstrating that your claims are not as certain as you present them to be.


> Are you suggesting the code behind the LLMs is just, like, ineffable or something?

No, he's suggesting that the code behind the human brain is "ineffable or something". For all we know, human brains might be "just statistics", "nothing magical about them".

You have no way of knowing that LLMs don't think, because we literally don't know what thinking is.


> You have no way of knowing that LLMs don't think, because we literally don't know what thinking is.

Except we do know enough to know that LLMs are not following the same processes as humans.

Or, at least, I know enough, and experts in the field that I talk to know enough. I suppose I can't speak for you.


> Or, at least, I know enough, and experts in the field that I talk to know enough.

AI experts know nothing about neuroscience so I'm not sure what you think this proves. Ask any neuroscientist if we have a precise mathematical understanding of how the brain works. You accuse others of tiresome leaps of logic, but you seemingly don't realize that there is literally no evidence supporting your claims about the brain.

I suggest rereading my detailed argument above until the following sinks in: we have no mathematical understanding of folk-psychology concepts like 'thoughts' and 'feelings', therefore you cannot claim that LLMs, or any other machine learning algorithm, do not contain those mathematical objects.


> Or, at least, I know enough, and experts in the field that I talk to know enough. I suppose I can't speak for you.

In other words, "trust me, bro".

You claiming that you're an expert (or talk with experts, and therefore are something-like-an-expert) about something doesn't convince me of anything except that you're full of yourself. Any idiot (or LLM, for that matter) can write such a comment on the internet. Providing logical arguments and real information, however, would be much more effective at convincing me that LLMs cannot think.

But that would require you to actually be an expert, wouldn't it?


By that standard we have no way of knowing whether rocks think, either.


Exactly.

The only reason we assume that rocks don't think is that they don't appear to do anything that would require thinking. LLMs, on the other hand...


Humans do this unconsciously as well. There are extreme medical conditions, like confabulation, where people just make up the strangest things.

Split-brain patients will make up stories when you prompt one hemisphere and the other half has to explain why it did what it did.

It also baffles me that reverse psychology works on LLMs to bypass some of their safeties. I mean, we're using psychological tricks that work on toddlers, and they also work on these models.

I'm an amateur neuroscientist, as you can see, but I find LLMs fascinating.


Is it? To my knowledge we don't have reliable data on why humans do this. To me it appears as if we spend a significant amount of our time retroactively making up justifications, even to ourselves, for things there's little reason to think we did based on any conscious decision-making at all.



