
I wouldn't say most of what's in that interview is in "very concrete technical" terms - at least when it comes to other people's research programs. More importantly, while it's perfectly reasonable for LeCun to believe in his own research program and not others', "this one lab's plan is the one true general-AI research program and most researchers are pursuing dead ends" doesn't seem like a very sturdy foundation on which to rest "nothing to worry about here" - especially since LeCun doesn't give an argument here for why his program would produce something safe.


You can ignore that; of course he'll push his own research. But he never says that what he does will lead to AGI. He's proposing a way forward to overcome some specific limitations he discusses.

Beyond that, he makes some perhaps subtle points about learning hidden variable models that are relevant to current discussions about whether modeling text well necessarily requires learning a world model. (A toy illustration of what "hidden variable model" means here is below.)
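
To make "hidden variable model" concrete - purely an illustrative sketch, not anything from the interview, with made-up numbers - here's a toy hidden Markov model in Python: observed tokens are emitted by an unobserved "world state", and computing the likelihood of a token sequence means marginalizing over that hidden state.

    import numpy as np

    # Two hidden "world states", three observable tokens (hypothetical values).
    T = np.array([[0.9, 0.1],       # state-transition probabilities
                  [0.2, 0.8]])
    E = np.array([[0.7, 0.2, 0.1],  # per-state emission probabilities
                  [0.1, 0.3, 0.6]])
    pi = np.array([0.5, 0.5])       # initial state distribution

    def sequence_likelihood(obs):
        # Forward algorithm: P(obs), summed over all hidden-state paths.
        alpha = pi * E[:, obs[0]]
        for o in obs[1:]:
            alpha = (alpha @ T) * E[:, o]
        return alpha.sum()

    print(sequence_likelihood([0, 0, 2, 1]))  # likelihood of one token sequence

The point is that a model assigning good probabilities to the token sequence is implicitly tracking the hidden-state dynamics, which is (roughly) the sense in which "modeling text well" and "learning a world model" get tied together in those discussions.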



