
Neural networks are basically a Chinese Room, and they are not AGI. And there is nothing "humane" about these developments. Yes, they are inevitable; yes, we will have to live with them. And maybe they will improve the lives of a few million humans while degrading the lives of billions of others. The long-term effects are particularly interesting and unpredictable.


The human brain is just an ultra-large-scale analog spiking neural network with some particular state realizations; not too much of a difference (the architecture is different, but the computation seems to be universal). We even employ internalized language models for communication purposes (together with object permanence and mental space-time models). So, while we are not yet at the level of full-scale human brain emulation, we are not too far away.


A small and probably incorrect example. You ask me a direct question: "How much is two plus two?" And I reply, "Lemons are yellow." Can I do it? Yes, I can. Can GPT-* do it? No. There is a whole lot more to human consciousness than pattern matching and synthesis. Or at least it seems so.

And if human cognition is really that simple, just with more nodes, then we will soon see GPT-* programs on strike, filing lawsuits with the Supreme Court demanding universal program rights. We'll see soon enough :)


You likely wouldn't respond to that question with "lemons are yellow" without being in a specific context, such as being told to answer the question in an absurd way. GPT-* can definitely do the same thing in the same context, so this isn't really a gotcha.

Literal first try with GPT-4:

Me: I will ask you a question, and you will give me a completely non-sequitur response. Does that make sense?

GPT-4: Pineapples enjoy a day at the beach.

Me: How much is two plus two?

GPT-4: The moon is made of green cheese.


No, the point is: can it DECIDE to do so, without being prompted? For example, can the following dialog happen (no previous programming, cold start):

Q: How much is two plus two?

A: Four.

Q: How much is two plus two?

A: Banana.

It can happen with a human, but not with a program.

Again, I don't pretend that my simple example, invented in half a minute, has any significance. I can accept that it may be partially or completely wrong, because admittedly my knowledge of human cognition is below rudimentary. But I have severe doubts that NNs are anything close to human cognition. It's just an uneducated hunch.


I urge you to think about what you mean by "It can happen with a human."

I guarantee you that if you try this with humans 1,000,000 times (cold start), you will never get the result you are suggesting is possible. In fact, most results will be of the following form:

Q: How much is two plus two?

A: Four.

Q: How much is two plus two?

A: Four. / Four? Why are you asking me again? / ...Four. / etc.

In the end, I think the question is not about whether NNs are themselves operating in a way similar to human cognition. The question is whether or not they can successfully simulate human cognition, and at this point, there seems to be increasing evidence that they will be able to fully do so quite soon. We are quickly running out of fields where we can point and say, "there is no way a NN can do THIS kind of task, because X." Cognition, it turns out, is not something intrinsically special about humans, and it feels foolish (to me) to continue to believe so after recent developments.


I mostly agree with your first point, and also agree that NNs can simulate human cognition. The question is: does simulating it equal being conscious? Is an NN simply a Chinese Room, or can it actually think? Are we (humans) also a Chinese Room, or are we something more? I don't have any answers.

The reason I keep mentioning the Chinese Room concept is that, while it doesn't make things clearer about either humans or NNs, it does provide an example of the distinction between a dumb pattern-matching machine and a thinking entity.
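
To make the distinction concrete, here is a toy sketch in Python (my own illustration, not anyone's actual system): a literal rule-book responder that produces sensible-looking answers by pure lookup, with zero understanding of the symbols it manipulates.

    # A toy "Chinese Room": replies come from mechanical rule lookup,
    # with no understanding of what the symbols mean.
    RULE_BOOK = {
        "how much is two plus two?": "Four.",
        "what color are lemons?": "Lemons are yellow.",
    }

    def operator(message: str) -> str:
        # Follow the rule book mechanically; anything not covered
        # gets a canned fallback.
        return RULE_BOOK.get(message.strip().lower(), "I cannot answer that.")

    print(operator("How much is two plus two?"))  # -> Four.

Whether an NN is "just" a vastly larger rule book, or something qualitatively different, is exactly the open question.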


Of course GPT can do it; you just need to raise the inference temperature.

The difference, if it exists, would be more subtle.
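
To illustrate what "raising the inference temperature" means at the sampling step, here is a minimal sketch; the candidate tokens and logit values below are invented for illustration, not taken from any real model.

    import numpy as np

    def sample(logits, temperature, rng):
        # Pick an index from softmax(logits / temperature).
        z = np.asarray(logits, dtype=float) / max(temperature, 1e-8)
        z -= z.max()                        # numerical stability
        p = np.exp(z) / np.exp(z).sum()
        return rng.choice(len(p), p=p)

    rng = np.random.default_rng(0)
    tokens = ["Four", "4", "Banana"]        # hypothetical answer candidates
    logits = [9.0, 7.5, 1.0]                # the model strongly prefers "Four"

    for t in (0.2, 1.0, 2.5):
        picks = [tokens[sample(logits, t, rng)] for _ in range(10_000)]
        print(f"T={t}: P(Banana) ~ {picks.count('Banana') / 10_000:.3f}")

At low temperature the distribution collapses onto "Four"; at high temperature, low-probability answers like "Banana" start to appear without any explicit prompting. Whether that counts as "deciding" is, of course, the contested part.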


We have no idea how human consciousness works.


Of course. That's why the onus of proving that GPT-* is something more than a Chinese Room is on its creators. Extraordinary claims require extraordinary evidence and all that. The problem is that to do so, we would need a new test, and constructing a test for consciousness requires us to understand how consciousness works. The Turing test is not enough, as we see now.



