Hacker News

What did he get right? While Hinton, LeCun, and others were laying the foundations of deep learning in the 80s, Hofstadter was writing expert systems in Lisp.


Just for the record: Hinton was a Lisp user at some point in time. In 1984 he got a Symbolics 3600. LeCun co-wrote Lush, a specialized Lisp for numerical computing. https://lush.sourceforge.net


That you have to reach for such obscure examples kind of illustrates my point, no? Geoff Hinton isn't known for "using Lisp at some earlier point in time."

It's not my goal to denigrate Lisp. I think Lisp is great. I learned programming with DrRacket. But Hofstadter's contribution to our understanding of intelligence is somewhere between "negligible" and "counterproductive."


One could say the same about Hinton and LeCun.


Are you joking? Learning nonlinear hidden representations ranks among the most important scientific discoveries of the past fifty years.


How does it relate to intelligence though?

Certainly it's very useful for training ML models, but its relationship to intelligence has yet to be determined.


The idea that knowledge can be expressed in distributed representations rather than sequences of symbols is a huge advancement in our understanding of intelligence. Obviously we haven't "solved intelligence" but now we at least have some idea of what the right questions to ask are.
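To make the "distributed vs. symbolic" contrast concrete, here's a toy sketch (the concepts and feature vectors are made up purely for illustration): with one-hot symbol codes, every pair of concepts is equally dissimilar, while a distributed code over shared features makes similarity fall out of the vector geometry.

```python
import math

def cosine(a, b):
    # cosine similarity between two vectors
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

# local (one-hot) codes: each concept gets its own orthogonal slot
local = {"cat": [1, 0, 0], "dog": [0, 1, 0], "car": [0, 0, 1]}

# distributed codes over made-up features: [animate, four_legged, has_engine]
dist = {"cat": [1.0, 1.0, 0.0],
        "dog": [1.0, 1.0, 0.1],
        "car": [0.0, 0.0, 1.0]}

print(cosine(local["cat"], local["dog"]))  # 0.0 — symbols carry no similarity
print(cosine(dist["cat"], dist["dog"]))    # ≈ 0.9975 — shared features
print(cosine(dist["cat"], dist["car"]))    # 0.0 — no feature overlap
```

The point is only that similarity structure is implicit in a distributed code, whereas with pure symbols it has to be stated explicitly, rule by rule.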


Can you share a link to the paper please? I feel like I get you, but would like to disambiguate with the mathematical representation.


I'm not talking about a single paper, I'm talking about a whole research programme. But Rumelhart, Hinton, and Williams (1986) is very important. A lot of the foundational work is collected in the PDP report.
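A minimal sketch of the core idea in plain Python (the 2-2-1 architecture, learning rate, and epoch count here are arbitrary toy choices, not taken from the paper): backpropagation lets the hidden units learn an internal representation of XOR, a function no single-layer perceptron can compute.

```python
import math
import random

random.seed(1)

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

# XOR truth table: not linearly separable, so a hidden layer is required
data = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 0)]

# weights for a tiny 2-input -> 2-hidden -> 1-output network
W1 = [[random.uniform(-1, 1) for _ in range(2)] for _ in range(2)]
b1 = [0.0, 0.0]
W2 = [random.uniform(-1, 1) for _ in range(2)]
b2 = 0.0
lr = 0.5

def forward(x):
    h = [sigmoid(W1[j][0] * x[0] + W1[j][1] * x[1] + b1[j]) for j in range(2)]
    y = sigmoid(W2[0] * h[0] + W2[1] * h[1] + b2)
    return h, y

def mse():
    return sum((forward(x)[1] - t) ** 2 for x, t in data) / len(data)

err_before = mse()
for _ in range(20000):
    for x, t in data:
        h, y = forward(x)
        # squared-error gradient through the sigmoid output unit
        dy = (y - t) * y * (1 - y)
        for j in range(2):
            # error propagated back to hidden unit j
            dh = dy * W2[j] * h[j] * (1 - h[j])
            W2[j] -= lr * dy * h[j]
            W1[j][0] -= lr * dh * x[0]
            W1[j][1] -= lr * dh * x[1]
            b1[j] -= lr * dh
        b2 -= lr * dy
err_after = mse()

preds = [round(forward(x)[1]) for x, _ in data]
print("error:", err_before, "->", err_after, "predictions:", preds)
```

The hidden units start out meaningless and end up encoding features of the input (something like OR and AND) that make the problem linearly separable for the output unit.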


> But Rumelhart, Hinton, and Williams (1986) is very important. A lot of the foundational work is collected in the PDP report.

Thanks!

OK, so this is basically a connectionist model of mind approach?

I can definitely see this as an ancestor of current neural network approaches, and I now have some idea of what you mean by distributed representations.

I mean no disrespect here, but this has not contributed to our understanding of intelligence at all. It's proved useful for training models on large datasets to perform certain tasks (like vision and speech), but those are not necessarily the same thing at all.

It's a massive, massive advancement in the field of statistics and learning from data, but doesn't seem to map to my conception of intelligence at all.

(as you may have guessed, I'm sceptical that statistical learning approaches will lead to human-level intelligence).


> It's a massive, massive advancement in the field of statistics and learning from data, but doesn't seem to map to my conception of intelligence at all.

You think "learning from data" has nothing to do with "intelligence"?


Links would be greatly appreciated. I'm definitely not familiar with NN work from the 80s.



