
Pretty arrogant to call it a comical failure when OpenAI is openly stating they're not training GPT-5. I think people who believe we can do much better than GPT-4 without insane cost scaling and a nuts amount of data (which is going to get harder to collect once regulations kick in) probably don't know what they're talking about. We're deep into diminishing returns here.


I do think we can do much better than GPT-4. Size isn't everything: small models can outperform large ones when trained and finetuned in the right way. And transformers are hardly the be-all and end-all of language AI either; there's plenty of reason to believe they're both inefficient and architecturally unsuited for some of the tasks we expect of them. The field is brand new, and now that ChatGPT has turned the world's figurative Eye of Sauron toward developing human-level AI, we're going to see a lot of progress very quickly.
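One well-known route to the "small models trained the right way" claim is knowledge distillation, where a small student is trained to match a large teacher's output distribution rather than just the hard labels. A minimal sketch in PyTorch, assuming hypothetical student/teacher models and a standard classification-style batch (every name here is a placeholder, not anything from the thread):

    # Minimal knowledge-distillation step: the student learns from a blend
    # of the true labels and the teacher's softened output distribution.
    # student, teacher, batch, and optimizer are assumed/hypothetical.
    import torch
    import torch.nn.functional as F

    def distill_step(student, teacher, batch, optimizer, T=2.0, alpha=0.5):
        inputs, labels = batch
        with torch.no_grad():
            teacher_logits = teacher(inputs)  # teacher stays frozen
        student_logits = student(inputs)

        # Standard cross-entropy against the hard labels.
        hard_loss = F.cross_entropy(student_logits, labels)

        # KL divergence between temperature-softened distributions;
        # scaling by T*T keeps gradient magnitudes comparable across T.
        soft_loss = F.kl_div(
            F.log_softmax(student_logits / T, dim=-1),
            F.softmax(teacher_logits / T, dim=-1),
            reduction="batchmean",
        ) * (T * T)

        loss = alpha * hard_loss + (1 - alpha) * soft_loss
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
        return loss.item()

With alpha and T tuned on a validation set, this kind of objective is one way much smaller models have been pushed surprisingly close to large-model quality on narrow tasks; it's a sketch of the general technique, not a claim about any specific model.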



