
This article looks like a case of skating to where the puck is rather than where it's going. Over the next 2-4 years this will change: the rate of improvement in AI is staggering, and these tools are in their infancy.

I would not be confident betting a career on any of those patterns holding. It is like people hand-optimising their assembly back in the day. At some point the compilers get good enough that the skill is a curio rather than an economic edge.




There was a step change over the last few years, but that rate of improvement is not continuing. The currently known techniques seem to have hit a plateau. It's impossible to know when the next "Attention Is All You Need" will materialize. It could be in 2025 or 2035.


o1-style techniques (tree search, test-time training, that sort of thing) have not hit any recognizable plateau. There's still low-hanging fruit all around.


This discussion includes o1. It's a marginal improvement compared to, say, the jump from GPT-3 to GPT-4.


Or 2023... There are a lot of papers out there!


We said the same thing 3 years ago, and we still get errors on basic questions. I don't know where people get their estimates from. Their intuition?


I think the tech chauvinism (aka accelerationism) comes from the crypto-hype era and has unfortunately been merged into the culture wars, making reasonable discussion impossible in many cases.


"Culture wars" aside, how is it different from the turn of the millennium dot-com bubble hype accelerationism ?

EDIT: http://www.antipope.org/charlie/blog-static/fiction/accelera...

Especially the first 3 chapters, set in the near future (which is ~today now).


So if something takes more than 3 years it doesn’t happen?

Models have been getting better, at a fast clip. With occasional leaps. For decades.

The fact that we are even talking about model coding limitations greatly surpasses expectations for 2024 from just a few years ago.

Progress in leaps and bounds isn't going to stop short of a global nuclear winter.


You made a very specific time prediction. Claiming something will happen eventually is an entirely different thing.

We all expect just about every technology to get better eventually, but you may notice some things seem to be taking decades.

Edit: realized I replied as if Nevermark was the source of the first post, so just note that's not the case.


Yes, that was exactly my point. Will AI get there? Sure. But how do they throw out a specific time prediction? 2-3 years is specific. It's so specific that companies could make strategic decisions to adopt it faster, and there is a huge price to pay if it turns out not to be as trustworthy and bug-free as we hoped. That could be a huge problem for the economy, with companies needlessly dealing with problems that cost money and time to solve. If people said "it's amazing now, and in the next decade it will be production-ready and usable with trust", that would cast a different outlook and different strategies would be taken. But because the estimates are specific and near-term, everything changes, even if every 3 years for the next 10 years they say it again. So yeah, eventually we'll get there one day.


If you've been claiming for the past three years that it will happen any day now, then I won't believe you about the next three years. Simple as that.


Agree. We will probably be designing projects differently and using different tools in order to make them more manageable for AI code assistants.



