
The problem with the idea of the singularity--that AI will improve itself--is that it assumes intelligence is an important part of improving AI.

AIs improve by gradient descent, the same as ever. It's all basic math and a little calculus: make tiny tweaks to the model, over and over and over.

There's not a lot of room for intelligence to improve on this. Nobody sits down, thinks really hard, and produces a better model as the result of their thinking; no, the models improve because a computer runs the same basic loop trillions of times.
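To make that concrete, here's roughly what that loop looks like, as a toy Python sketch (the one-parameter model, data, and learning rate are all made up for illustration):

    # Plain gradient descent on a toy model: fit y = w*x to data.
    xs = [1.0, 2.0, 3.0, 4.0]
    ys = [3.0, 6.0, 9.0, 12.0]   # generated by y = 3x

    w = 0.0      # single parameter
    lr = 0.01    # learning rate

    for step in range(10_000):   # "over and over and over"
        # gradient of mean squared error wrt w: d/dw (w*x - y)^2 = 2*(w*x - y)*x
        grad = sum(2 * (w * x - y) * x for x, y in zip(xs, ys)) / len(xs)
        w -= lr * grad           # the tiny tweak

    print(w)  # converges to ~3.0

Scale the parameter count up to billions and use backpropagation to get the gradients, and that's still the core of training. No step in the loop calls for insight.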

That's my impression anyway. Would love to hear contrary views. In what ways can an AI actually improve itself?



I studied machine learning in 2012. Gradient descent wasn't new back then either, but it was 5 years before the "Attention Is All You Need" paper. Progress might look continuous overall, but if you zoom in enough it looks more discrete, with breakthroughs needed to jump the gaps. The question to me now is: how many papers like "Attention Is All You Need" before a singularity? I don't have that answer, but let's not forget that until they released ChatGPT, OpenAI was considered a joke by many people in the field, who asserted their approach was a dead end.
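For anyone who hasn't read it: the core of that 2017 paper boils down to scaled dot-product attention, which is a surprisingly small amount of math. A minimal self-attention sketch with NumPy (the shapes are arbitrary, and I've omitted the learned Q/K/V projections and multiple heads a real transformer has):

    import numpy as np

    def attention(Q, K, V):
        # similarity of every query to every key, scaled by sqrt(d_k)
        d_k = Q.shape[-1]
        scores = Q @ K.T / np.sqrt(d_k)
        # softmax over keys turns scores into mixing weights
        weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
        weights /= weights.sum(axis=-1, keepdims=True)
        # each output is a weighted average of the value vectors
        return weights @ V

    x = np.random.randn(4, 8)   # 4 tokens, 8-dim embeddings
    out = attention(x, x, x)    # self-attention: Q, K, V all from x
    print(out.shape)            # (4, 8)

The point being: the breakthrough wasn't more compute on the same loop, it was a new architecture for the loop to optimize.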



