
This has been my suspicion for a long time - OpenAI has indeed been working on "GPT-5", but training and running it is proving so expensive (and its actual reasoning abilities are only marginally stronger than GPT-4's) that there's just no market for it.

It points to an overall plateau being reached in the performance of the transformer architecture.



That would certainly reduce my anxiety about the future of my chosen profession.


But even if there is a plateau in the transformer architecture itself, what you can do with those base models through further finetuning / modifying / enhancing them is still largely unexplored, so I still predict mind-blowing enhancements yearly for the foreseeable future. Whether they validate OpenAI's valuation and investment needs is a different question.


Certainly hope so. The tech billionaires are a little too excited to achieve AGI and replace the workforce.


TBH, with the safety/alignment paradigm we have, workforce replacement was not my top concern when we hit AGI. A pause / lull in capabilities would be hugely helpful so that we can figure out how not to die along with the lightcone...


Not sure why anyone thinks it's possible to fully control AGI; we can't even fully tame a house cat.


Is it inevitable to you that someone will create some kind of techno-god behemoth AI that will figure out how to optimally dominate an entire future light cone starting from the point in spacetime of its self-actualization? Borg or Cylons?


I feel like this period has shown that we're not quite ready for a machine god. We'll see if RL hits a wall as well.



