Kuinox | 7 months ago | on: GPT-5 is behind schedule
Wait a few months and they will have a distilled model with the same performance and 1% of the run cost.
peepeepoopoo98 | 7 months ago
A 100X efficiency improvement (doubtful) still means that costs grow 200X faster than benchmark performance.
achierius | 7 months ago
Even assuming that past rates of inference cost scaling hold up, we would only expect a 2 OoM decrease after about a year or so. And 1% of $3.5B is still a very large number.
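A quick back-of-envelope of that arithmetic, assuming the $3.5B figure is an annual run cost in dollars (both the unit and the timeframe are assumptions from context, not stated explicitly in the thread):

    # Back-of-envelope: what a 2 OoM (100x) inference-cost reduction leaves.
    # Assumption: the ~$3.5B figure is an annual run cost in USD.
    annual_run_cost_usd = 3.5e9      # assumed ~$3.5B/year
    efficiency_gain = 100            # 2 orders of magnitude
    remaining_cost_usd = annual_run_cost_usd / efficiency_gain
    print(f"${remaining_cost_usd:,.0f} per year")  # -> $35,000,000 per year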
popcorncowboy | 7 months ago
And to your point, "past performance is not indicative of future results." The extrapolate-to-infinity approach is the mindfever of this field.