Hacker News

> Humans don't seem to need anywhere near the same level of data or training that our current models need.

I find this to be a common misunderstanding. If I show you one Stirch Wrench and you've never seen one before, you learn instantly; perhaps for the rest of your life you'll know what a Stirch Wrench is. The problem is that I didn't show you just one example. You saw perhaps millions of examples: your conscious process filters them out, but the slight shaking of your head, the constant pulsing of the light sources around you, and so on, all augment that one image into many examples. I think humans are indeed training on millions of examples; we just don't notice it.
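This is essentially what data augmentation does in machine learning. A minimal sketch (my own illustration, not anything from the comment above), using NumPy with made-up jitter parameters, showing how one image can be turned into many slightly different training examples via small shifts and brightness changes:

```python
import numpy as np

def augment(image, n_views=8, rng=None):
    """Turn one image into n_views jittered 'views': small spatial
    shifts (like head motion) and brightness scaling (like flickering
    light). Pixel values are assumed to lie in [0, 1]."""
    rng = rng if rng is not None else np.random.default_rng(0)
    views = []
    for _ in range(n_views):
        # Shift the image by up to 2 pixels in each direction.
        dy, dx = rng.integers(-2, 3, size=2)
        shifted = np.roll(image, shift=(dy, dx), axis=(0, 1))
        # Scale brightness by +/- 20%, then clip back into [0, 1].
        brightness = rng.uniform(0.8, 1.2)
        views.append(np.clip(shifted * brightness, 0.0, 1.0))
    return views

# One 16x16 "image" becomes eight distinct training examples.
image = np.random.default_rng(42).random((16, 16))
views = augment(image, n_views=8)
print(len(views))
```

The exact transforms (shift range, brightness range) are arbitrary here; the point is just that a single observation plus cheap perturbations yields many examples.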

> That alone is a sign that deep learning may not be enough.

I 100% agree with that. It's going to take improvements in lots of areas, many unexpected, but I think the deep learning approach is the "wings" that will be near the core.



I think what you're terming a misunderstanding is actually fairly well known, but it doesn't account for the magnitude of the situation.

Here's a great article about a paper showing that humans' prior knowledge does help with learning new tasks: https://www.technologyreview.com/s/610434/why-humans-learn-f...

However, that doesn't account for how quickly toddlers learn a variety of things from a small amount of information. Even more importantly, you can also just look at things like AlphaGo: it trains on more examples than could be accumulated in a hundred human lifetimes.

For these reasons I don't believe "more data" and "more training" are the answer. We're going to need to do a lot more work figuring out how humans manage recall and how we link together all the data, and I would be surprised if this didn't involve finding out that our brain processes things in ways far different from our current deep neural nets. I don't believe incrementalism is going to get us to AGI.


I’m always puzzled at this idea that humans, at whatever age, are learning things with a small amount of information. The full sensory bandwidth of a baby from pregnancy to toddlerhood seems huge to me. I suspect that helps, as does the millions of years it took to create the hardware it all runs on.


I don’t believe incrementalism will get us there either. We need many more 10x+ advances. But I think it’s relatively clear where those advances need to be: simply by making 10x advances in maybe 100 or 1,000 domains, we’ll get there. Neuralink, for example, just announced many 10x+ advances, such as in the number of electrodes you can put in the brain. Our lab is working on a number of things that will also be 10x advances in various subdomains.

Lots of advances in many fields will lead to something greater than the sum of their parts.

Edit: p.s. I like your comment about toddlers. As a first-time father of a 6-month-old, it's been very intellectually interesting watching her learn, in addition to her just being the greatest bundle of joy ever :)


I think that still lacking a hundred or a thousand 10x advances (you may be more pessimistic than me) does not merit calling your work pre-AGI.



