Hacker News

Why couldn't deep neural nets scale to AGI? What makes it fundamentally impossible for neural nets + tooling to accomplish the suite of tasks we consider AGI?

Also, prompt engineering works for humans too. It's called rhetoric, writing, persuasion, etc. Just because the intelligence of LLMs is different from that of humans doesn't mean it isn't a form of intelligence.



> Why couldn't deep neural nets scale to AGI

Speaking as a former cognitive neuroscientist, our current NN models are large, but simpler in design relative to biological brains. I personally suspect that matters, and that AI researchers will need more heterogeneous designs to make that qualitative leap.


Seems like that's already happening with mixture of experts. I'm not sure you're presenting an inherent LLM barrier.
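For readers unfamiliar with the term: a mixture-of-experts layer replaces one big feed-forward block with several smaller "expert" blocks plus a gating network that routes each input to only a few of them. A minimal sketch (the class name, shapes, and top-k routing rule here are illustrative assumptions, not any particular production architecture):

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

class MoELayer:
    """Toy mixture-of-experts layer: a gating network scores the
    experts, and only the top-k experts actually run on the input."""
    def __init__(self, d_in, d_out, n_experts=4, k=2):
        # each expert is just a random linear map in this sketch
        self.experts = [rng.normal(size=(d_in, d_out)) for _ in range(n_experts)]
        self.gate = rng.normal(size=(d_in, n_experts))
        self.k = k

    def forward(self, x):
        scores = softmax(x @ self.gate)        # routing probabilities
        top = np.argsort(scores)[-self.k:]     # indices of top-k experts
        w = scores[top] / scores[top].sum()    # renormalize their weights
        # only the chosen experts compute: cost scales with k, not n_experts
        return sum(wi * (x @ self.experts[i]) for wi, i in zip(w, top))

layer = MoELayer(d_in=8, d_out=4)
y = layer.forward(rng.normal(size=8))
print(y.shape)  # (4,)
```

The point relevant to the thread: the network is more heterogeneous in structure (specialized sub-networks, conditional routing) while still being implemented as ordinary matrix multiplies, so it keeps the hardware efficiency discussed below.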


Hinton's done work on neural nets that are more similar to human brains, and so far it's been a waste of compute. Multiplying matrices is more efficient than a physics simulation that stalls out all the pipelines.


Doing the wrong thing more efficiently doesn't make it into the right thing.


Fair point. I guess nobody knows yet, and it's also worth a shot. In the context of AI alignment, I don't see strong evidence to suggest deep neural nets, transformers, and LLMs have any of the fundamental features of intelligence that even small mammals like rats have. ChatGPT was trained on data that would take a human several lifetimes to learn, yet it still makes some rudimentary mistakes.

I don't think more data would suddenly manifest all the nuance of human intelligence... that said, we could be just one breakthrough away from discovering some principle or architecture, though I suspect it will arrive as a theory first, rather than a large system that suddenly wakes up and has agency.



