
Few people seem to be mentioning the improvements in reliability and the reduced deception rates. If the benchmarks given generalize, what OpenAI has in GPT-5 is a cheap, powerful, _reliable_ model -- the perfect engine for generating high-quality synthetic data to punch through the training data bottleneck.

I'd expect that at some level of reliability this could lead to a self-improvement cycle, similar to how a sufficiently powerful model (the Claude 4 models in Claude Code) can iteratively converge on a solution to a problem even when it can't one-shot it.

No idea if we're at that point yet, but it seems a natural use for a model with these characteristics.
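The iterative-convergence idea above can be sketched as a generate-and-check loop. This is a minimal illustration, not anyone's actual implementation: `propose` and `passes` are hypothetical stand-ins for a model call and a verifier (e.g. a test suite), with toy logic so the loop is runnable.

```python
def converge(propose, passes, max_rounds=5):
    """Repeatedly ask the model for a candidate, feeding verifier
    output back as context, until the verifier accepts or we give up."""
    feedback = None
    for _ in range(max_rounds):
        candidate = propose(feedback)
        ok, feedback = passes(candidate)
        if ok:
            return candidate
    return None  # never converged within the budget

# Toy stand-ins: the "model" nudges its answer by one each round,
# and the "verifier" accepts only the value 3.
def propose(feedback):
    return 0 if feedback is None else feedback + 1

def passes(candidate):
    return (candidate == 3, candidate)

print(converge(propose, passes))  # → 3
```

The point is that a one-shot success rate below 100% still converges quickly as long as the verifier is trustworthy and the model improves on feedback rather than flailing.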


