
The difference is that people were able to propose detailed mechanisms for realistic flying machines long before we actually achieved powered flight; it was mainly a matter of increasing the power-to-weight ratio of the propulsion system. For AGI, do you really think there are detailed proposals out there today that can achieve AGI but are only missing the computational power?

Actually, the existence of human brains, with their (comparatively) extremely low power consumption, indicates that we need something radically different from current silicon-based processors to achieve AGI.



> For AGI, do you really think there are detailed proposals out there today that can achieve AGI but are only missing the computational power?

Yes, I really do. It's neural networks, nothing more. All that is required is more power. Despite its far lower power consumption, the brain is much more computationally powerful than even the largest supercomputers.

Although it is not necessary, imo, to have a much more efficient computing substrate to achieve AGI, there is work in this direction. Look into optical reservoir computing, or, more generally, thermodynamic computing. https://arxiv.org/abs/1911.01968
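
To give a flavour of the reservoir idea in its ordinary, non-optical form, here is a minimal echo state network sketch in NumPy. It is purely illustrative; the reservoir size, scaling, and toy task are arbitrary choices of mine, not anything from the paper linked above.

    # Minimal echo state network (the standard, non-optical form of
    # reservoir computing). Illustrative only: a fixed random reservoir is
    # driven by the input, and only a linear readout is trained.
    import numpy as np

    rng = np.random.default_rng(0)
    n_in, n_res = 1, 200
    W_in = rng.uniform(-0.5, 0.5, (n_res, n_in))
    W = rng.uniform(-0.5, 0.5, (n_res, n_res))
    W *= 0.9 / np.max(np.abs(np.linalg.eigvals(W)))  # keep spectral radius < 1

    def run_reservoir(u):
        """Collect reservoir states for an input sequence u of shape (T, n_in)."""
        x = np.zeros(n_res)
        states = []
        for u_t in u:
            x = np.tanh(W_in @ u_t + W @ x)
            states.append(x)
        return np.array(states)

    # Toy task: predict a sine wave one step ahead.
    t = np.linspace(0, 20 * np.pi, 2000)
    u = np.sin(t)[:, None]
    X = run_reservoir(u[:-1])
    y = u[1:, 0]
    w_out, *_ = np.linalg.lstsq(X, y, rcond=None)    # train the linear readout only
    print("train MSE:", np.mean((X @ w_out - y) ** 2))

The point of the approach is that the expensive, nonlinear part (the reservoir) is fixed and could in principle be any physical dynamical system, optical or thermodynamic, while only the cheap linear readout is trained.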

Machine learning is progressing rapidly mostly because the computational power is increasing rapidly. Read this for an understanding of how important computational power is to progress in machine learning: http://www.incompleteideas.net/IncIdeas/BitterLesson.html

Time and time again, computational power trumps complicated technique. It is fairly obvious to many now, after the continued scaling of the GPT-x series models, that genuine intelligence is an emergent property of the kind of systems we are building: systems that are driven to predict their environment. There is no more secret sauce to discover.

I think a major objection people have to the idea that machines are already a little intelligent is that they cannot understand how intelligence can emerge from pure computation. They imagine that there must be some magic between the operation of the neurons in their mind and their experience of the world. Qualia: where do they come from? How can it be just neurons, just inanimate physical matter? Combine this philosophical objection with the emotional response to the threatening nature of AGI and you have a milieu that refuses to see what is directly in front of its nose.

There are plenty of philosophical positions that allow you to understand how consciousness might emerge naturally from computation. Consciousness may simply be emergent; panpsychism holds that consciousness is found in everything. To imagine consciousness in the movement of 1s and 0s within a machine is not too hard, and to see the consciousness inside the "Chinese room" is not really so difficult.

But none of this is relevant to the original argument I was making, which is that predicting the future is very hard. It is simply a fact that before most great breakthroughs, the common opinion is that such a thing is, if not impossible, then off in some distant future.

I think there are very, very good reasons to think that a general intelligence of genuine utility is not more than 10 years away. I think people will look back at GPT-3 as the first proto-form of the general intelligences that will exist in the future.


> It is fairly obvious to many now, after the continued scaling of the GPT-x series models, that genuine intelligence is an emergent property of the kind of systems we are building.

I respectfully disagree. The GPT-x series models are performing interpolation on an unfathomably massive corpus. It is not hard to find cases where they directly reproduce entire paragraphs from existing text. When given a prompt on a topic for which they find multiple existing texts with a similar degree of matching, such as different articles reporting on the same topic, they are able to blend the content of those articles smoothly.

I mean, GPT-3 is around 6 trillion bits of compressed data. The entire human brain has about 0.1 trillion neurons, and it obviously has a capacity far beyond GPT-3, even in the extreme case where we assume all the neurons in the human brain are used for generating English written text.
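
For reference, that ~6 trillion bit figure is roughly what you get from GPT-3's published 175 billion parameters if you assume they are stored as 32-bit floats. A quick back-of-envelope check:

    # Back-of-envelope: GPT-3's size in bits, assuming the published
    # 175 billion parameters are stored as 32-bit floats.
    params = 175e9
    bits_per_param = 32
    print(f"{params * bits_per_param / 1e12:.1f} trillion bits")  # ~5.6 trillion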

In my view GPT-x is very, very far from any kind of general intelligence.


> I respectfully disagree

Cool :)

> The entire human brain has 0.1 trillion neurons

You want to be thinking about synapses. There are about 7,000 synapses per neuron, so that's 7,000 * 0.1 trillion = 700 trillion synapses, roughly 100 times larger than GPT-3. Also consider that a neuron does a fair amount of processing within itself; there is some very recent research suggesting each neuron is akin to a mini neural network. So I would not be surprised if the human brain is 10,000 times more powerful than GPT-3.
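
Back-of-envelope, comparing the synapse count to the ~6 trillion bit figure you quoted (that is the comparison I am making here):

    # Synapse count vs the ~6 trillion bit estimate for GPT-3 quoted above.
    neurons = 0.1e12               # ~0.1 trillion neurons
    synapses_per_neuron = 7000
    synapses = neurons * synapses_per_neuron
    gpt3_bits = 6e12               # parent comment's estimate
    print(f"{synapses / 1e12:.0f} trillion synapses")   # 700 trillion
    print(f"~{synapses / gpt3_bits:.0f}x GPT-3")        # ~117, i.e. roughly 100x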

> It is not hard to find cases where it directly reproduces entire paragraphs from existing text. When given a prompt on a topic for which it finds multiple existing texts with similar degree of matching, such as different articles reporting on the same topic, it is able to blend the content of those articles smoothly.

This may be true, but it does not prove your hypothesis that the GPT-x models are simply "performing interpolation". Also, the ability to perform recall better than a human may have to do with the way that we perform global optimisation over the network, rather than the local, decentralised way that the brain presumably works. The point is that accurate memorisation does not preclude general intelligence. Spend some time with the models: sit down for a few hours and investigate what they know and do not know; really look, and see beyond what you expect to see. You may be surprised.



