I am by no means an expert. The way I think about it, gradient descent is a shotgun approach to learning, whereas a parent, guardian, teacher, or peer can pinpoint with precision what you are doing wrong, why it is wrong, what to change, and by how much. The evolutionary-learning argument doesn't pass the smell test for me on its own, but once you consider that society and human-to-human interaction have themselves evolved, combined with our ability to communicate an idea, you get faster learning.

I think ChatGPT and similar models have proper idea representation, but not segmentation or communication. In other words, they are not capable of proper idea retrieval, or of restructuring their architecture of ideas. We seem stuck on the idea of a mono-training loop, when even humans run at least two training loops (waking learning plus dreaming). I suspect the reason we haven't gotten results in that area yet is that we are too focused on iterative optimization schemes like gradient descent.

Like I said, though, I am not an expert; I might just be hallucinating the state of ML research.
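To make the "shotgun" point concrete: a gradient step nudges every parameter a small amount in whatever direction reduces a single scalar loss, with no targeted "here is the exact idea you got wrong" signal. A minimal illustrative sketch (toy problem and variable names are my own, not from any particular ML system):

```python
import numpy as np

# Toy problem: learn w in y = w*x by gradient descent.
# The only feedback is one scalar loss; the parameter is nudged
# along its gradient rather than given an explicit correction.
rng = np.random.default_rng(0)
x = rng.normal(size=100)
y = 3.0 * x  # true w = 3.0

w = 0.0      # initial guess
lr = 0.1     # learning rate
for _ in range(100):
    pred = w * x
    loss = np.mean((pred - y) ** 2)      # one number summarizing "how wrong"
    grad = np.mean(2 * (pred - y) * x)   # a direction to nudge, not an explanation
    w -= lr * grad

print(round(w, 3))  # converges to roughly 3.0
```

The point of the sketch: the teacher analogue would be being told "w should be 3.0, because y is always three times x" in one step, while the optimizer has to feel its way there through many small loss-driven nudges.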
