
This group of people may have been the first to mention the term “AI” prominently in academia, but is this flag planting, or are they truly foundational to the work that carries the same name today? If none of these people had done anything, would we really be far behind?

My sense is that modern AI owes more to Fukushima’s neocognitron, Hubel and Wiesel, and the connectionists than to any intellectual descendant of the work mentioned here.



Modern AI depends on one thing above all else: The unimaginable compute power provided by today's GPUs.

All theoretical foundations are trivial by comparison. The basic math can be understood by an interested high school student, but unless you are able to do billions of matrix multiplications per second, it doesn't mean anything. AI was going nowhere before Nvidia.
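
(To make the "basic math" claim concrete, here is a rough sketch of what a single network layer computes; the sizes are arbitrary and it's just numpy, nothing production-grade.)

    import numpy as np

    # One dense layer is essentially output = activation(input @ weights + bias).
    # The math fits on one line; the hard part is doing it billions of times
    # per second at much larger sizes than this toy example.
    rng = np.random.default_rng(0)
    x = rng.standard_normal((1, 4096))      # one input vector
    W = rng.standard_normal((4096, 4096))   # layer weights (arbitrary size)
    b = np.zeros(4096)

    h = np.maximum(0.0, x @ W + b)          # ReLU(xW + b): ~16.8M multiply-adds
    print(h.shape)                          # (1, 4096)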

Calling these people "the minds that launched AI" is like calling Archimedes the father of hot-air balloons, because he recognized the principle of buoyancy.


Are you trolling? Minsky, McCarthy, Newell and Simon all went on to win Turing Awards for their work (as later did several other AI luminaries over the decades). And Claude Shannon?

In 1969 Minsky and Papert published a book called "Perceptrons" which explored the limits of perceptrons, though it said those limits could be overcome by multilayer networks, if they were ever computationally feasible. And, thanks to Moore's law, they now are.
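
(For anyone who hasn't read it: the canonical example of those limits is XOR, which no single linear threshold unit can compute. A quick brute-force check of my own, with an arbitrary weight grid:)

    import itertools
    import numpy as np

    # XOR is not linearly separable: no weights w1, w2 and threshold t make
    # (w1*x1 + w2*x2 >= t) match the XOR truth table. Search a coarse grid
    # to illustrate (the grid itself is arbitrary).
    X = [(0, 0), (0, 1), (1, 0), (1, 1)]
    y = [0, 1, 1, 0]

    grid = np.linspace(-2.0, 2.0, 41)
    separable = any(
        all((w1 * x1 + w2 * x2 >= t) == bool(label) for (x1, x2), label in zip(X, y))
        for w1, w2, t in itertools.product(grid, grid, grid)
    )
    print("XOR linearly separable on this grid:", separable)   # False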

Almost every one of those attendees is a well-known scholar whose work you depend on every day outside AI (as well as within it, though nobody really depends on NNs yet). Of them, the least known in computing, Ray Solomonoff, might be the smartest of the bunch, though he was never university-affiliated as far as I know.


> And, thanks to Moore's law, they now are.

Not thanks to "Moore's law". Thanks to the countless engineers who made that happen, and whose work is much more important for today's AI systems than the theory that was cooked up in the 60s and 70s.

Your comment is a typical example of the hero-worship towards theorists, and the casual disregard towards engineers, that is so common in today's science culture.

Any above-average grad student could reinvent the perceptron network from scratch. Good luck having a grad student (or even a Nobel laureate) redesign the H100 GPU from scratch.
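
(That first claim is fair, for what it's worth; the classic perceptron learning rule is roughly the following, shown here on the linearly separable AND function with made-up hyperparameters:)

    import numpy as np

    # Rosenblatt-style perceptron learning rule on the (linearly separable)
    # AND function: nudge the weights toward each misclassified example.
    X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]])
    y = np.array([0, 0, 0, 1])

    w = np.zeros(2)
    b = 0.0
    lr = 0.1
    for _ in range(20):                       # a few passes are enough here
        for xi, yi in zip(X, y):
            pred = int(xi @ w + b > 0)
            w += lr * (yi - pred) * xi
            b += lr * (yi - pred)

    print([int(xi @ w + b > 0) for xi in X])  # [0, 0, 0, 1]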


You got that wrong: the problem with multilayer networks at the time was not scaling, but how to train such a network at all.

Using error backpropagation for training multilayer networks was what overcame that problem, not Moore's law, or anything else.


Don't you remember what machines were like back then? The PDP-10 was about a 0.4 MIPS timesharing computer with an 18-bit address space.

People like Rumelhart kept at it, and eventually the hardware caught up with the requirements.


I know computers were slow, and Moore's law was what made really big networks computationally feasible.

Still, the decisive algorithmic breakthrough beyond the perceptron was applying backpropagation to multilayer perceptrons (MLPs). Without multiple layers you can't solve problems that aren't linearly separable, and without error backpropagation you can't train multilayer networks.
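
(A minimal sketch of that combination, a one-hidden-layer MLP trained with backpropagation on XOR; the layer sizes, learning rate, and iteration count are arbitrary choices of mine, not anything from the historical papers:)

    import numpy as np

    # One hidden layer trained with error backpropagation on XOR,
    # the problem a single-layer perceptron cannot represent.
    rng = np.random.default_rng(1)
    X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
    y = np.array([[0.0], [1.0], [1.0], [0.0]])

    W1 = rng.standard_normal((2, 4))
    b1 = np.zeros(4)
    W2 = rng.standard_normal((4, 1))
    b2 = np.zeros(1)

    def sigmoid(z):
        return 1.0 / (1.0 + np.exp(-z))

    lr = 0.5
    for _ in range(5000):
        # forward pass
        h = sigmoid(X @ W1 + b1)
        out = sigmoid(h @ W2 + b2)
        # backward pass: gradient of the cross-entropy loss for a sigmoid
        # output, propagated back through the hidden layer
        d_out = out - y
        d_h = (d_out @ W2.T) * h * (1.0 - h)
        W2 -= lr * h.T @ d_out
        b2 -= lr * d_out.sum(axis=0)
        W1 -= lr * X.T @ d_h
        b1 -= lr * d_h.sum(axis=0)

    print(out.round(2).ravel())   # typically close to [0, 1, 1, 0]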



