
Funny that trillions of dollars in the span of two years are literally pointing to the writing on the wall, and you're so arrogant and blinded by cope that you can't see it. You legacy engineers really are something else.


You have exactly the same level of conviction toward an unknowable outcome. I think both of you would be better served by considering the middle ground instead of subscribing to a false dichotomy of boom or bust.

I think the biggest source of confusion here is that there are really two games being played: the money game and the technology game. Investments in AI are largely driven by speculation about their monetary outcome, not their technological outcome. Whether or not the technology survives the Venture Capital Gauntlet, the investment bubble could still pop, with only the businesses that have real business models surviving. Heaps of people lose their shirts to the tune of billions, and yet we still end up with an AI-powered future of some kind.

All this to say: you can be certain AI is a valuable technology and still believe the economics around it right now are not grounded in a clear reality. These are all bets on a future none of us can be sure of.


You can be quite sure market forces won't destroy the established behemoths. It simply doesn't happen often; inertia is a real thing. Look at Uber, Tesla, etc. That isn't to say there won't be a bust for many fledgling AI companies; in fact, I'm certain there will be.

But thinking the Tech Giants are going to crash is woefully ignorant of how the market works and indicates a clear wearing of blinders. It's a common take among coders who feel the noose tightening and who are the types of people led by their own fear. And I find that when you mix that fear with arrogance, the combination tends to correlate with older generations of software engineers who are poor at adapting to the new technology: the ones who constantly harp on how AI is full of mistakes while disregarding that humans are as well, and the ones who insist on writing even more than 70% of their own code rather than learning to guide the new tools granularly. It's a take that nobody should entertain or respect.

As for your point about 'a future none of us can be sure of,' I'll push back on that: it is not clear how AGI or ASI will come about, i.e., what architecture will underpin it. However, it is absolutely clear that AI-powered coding will continue to improve, that algorithmic progress can and will be driven by AI coders, and that this will lead to ASI.

The only way not to believe that is to think there is some special sauce behind consciousness. And I tend to believe in scientific theory, not magic.

That is why there is so much VC money. That is why the tech giants are all racing. It isn't a bet; it is a race toward a visible, clear goal of ASI that, again, it takes blinders not to see.

So while AI is absolutely a bubble, this bubble will mark the transition to an entirely new economic system, society, world, etc. (and flip a coin on whether any of us survive it lol, but that's a whole separate conversation)


> However - it is absolutely clear that AI powered coding will continue to improve...

Based on what precedent?


The current trend of continual improvement in LLM coding ability: solving previously unseen problems, handling larger codebases, operating for longer periods of time, and benefiting from better agentic scaffolding.

The reward-verifier compatibility between programming and RL (see the sketch below).
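
To make that concrete: a programming task comes with a mechanical verifier, so the reward for an RL step can be computed by just running the candidate against tests, with no human grader in the loop. Here's a minimal sketch in Python (the function and its shape are mine, purely illustrative):

    import os
    import subprocess
    import tempfile

    def reward(candidate_code: str, test_code: str) -> float:
        """Verifiable reward: 1.0 if the candidate passes its tests, else 0.0."""
        with tempfile.TemporaryDirectory() as d:
            path = os.path.join(d, "candidate.py")
            with open(path, "w") as f:
                f.write(candidate_code + "\n\n" + test_code)
            try:
                # The exit code of the test run is the whole verifier.
                result = subprocess.run(["python", path],
                                        capture_output=True, timeout=30)
            except subprocess.TimeoutExpired:
                return 0.0  # hanging or looping code fails verification
            return 1.0 if result.returncode == 0 else 0.0

Essay quality has no equivalent of that returncode check, which is why coding is such a natural RL target.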

Do you have a stronger precedent for that not being the case?


Is it your view that the improvements have been accelerating or constant?

In my view, improvements have grown both less frequent and less impressive.


Accelerating. Below is a list of the SOTAs over time (with some slight wiggle room between similar-era models):

gpt4         | 3/2023
gpt4-turbo   | 11/2023
opus3        | 3/2024
gpt4o        | 5/2024
sonnet3.5    | 6/2024
o1-preview   | 9/2024
o1           | 12/2024
o3-minihigh  | 1/2025
gemini2pro   | 2/2025
o3           | 4/2025
gemini2.5pro | 4/2025
opus4        | 5/2025
???          | 8/2025

This is not to mention the miniaturization and democratization of intelligence in the smaller models, which has also been impressive.

I'd say this shows that improvements are becoming more frequent; a bit of rough arithmetic on those dates, below, makes the point.
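
A quick Python snippet (just arithmetic on the dates listed above) computing the month-gaps between successive SOTA releases:

    from datetime import date

    # (model, release month) pairs from the list above
    releases = [
        ("gpt4",         date(2023, 3, 1)),
        ("gpt4-turbo",   date(2023, 11, 1)),
        ("opus3",        date(2024, 3, 1)),
        ("gpt4o",        date(2024, 5, 1)),
        ("sonnet3.5",    date(2024, 6, 1)),
        ("o1-preview",   date(2024, 9, 1)),
        ("o1",           date(2024, 12, 1)),
        ("o3-minihigh",  date(2025, 1, 1)),
        ("gemini2pro",   date(2025, 2, 1)),
        ("o3",           date(2025, 4, 1)),
        ("gemini2.5pro", date(2025, 4, 1)),
        ("opus4",        date(2025, 5, 1)),
    ]

    # Months between each successive SOTA release
    gaps = [(b.year - a.year) * 12 + (b.month - a.month)
            for (_, a), (_, b) in zip(releases, releases[1:])]
    print(gaps)  # [8, 4, 2, 1, 3, 3, 1, 1, 2, 0, 1]

Every gap after the first two is three months or fewer.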

---

Each wave of models was a significant step above what came previously. One needs only to step back a generation to be reminded of the intelligence differential.

Some notable differences have been o3-minihigh's and gemini2.5's ability to spit out 1-3k LOC (lines of code) with accurate alterations (most of the time). Though better prompting should generally be used to avoid doing this, the ability is impressive.

The context length combined with gemini 2.5 pro's intelligence is incredible. To load 20k+ LOC of a project and receive a targeted code change that implements a perfect update is ridiculous.

Dropped imports and improper syntax have been dramatically reduced.

I'd say this shows improvements are becoming more impressive.

---

Also note the timespan.

We are only 25 months into the explosion kicked off by GPT-4.

We are only 12 months into the reasoning paradigm.

We have barely scratched the surface of agentic tooling and scaffolding.

There are countless architectural improvements and alternatives in development and research.

Infrastructure buildouts and compute scaling are also chugging along, allowing faster training, faster inference, faster testing, etc. etc.

---

This all paints a picture of acceleration in both the pace and the depth of capability.


You claim to believe in scientific theory and not magic, but you are asserting many things without evidence.


If you want to be more specific, I'd be happy to supply evidence for any of the assertions I made.


Maybe it's the legacy capitalists who are really something else?



