
Writing code quickly may not help the mythical 10x engineer, but it might help a 1x engineer become a 5x engineer.

In relative terms, the unaided 10x engineer becomes only a 2x engineer, because their peers are now vastly more productive.

That's huge.



The GP is undermining their own point by even mentioning “5x”, opening the door to thinking about this in linear terms. But the concept of a “10x engineer” was never about how fast they produce code, and the multiplier was never a fixed number. The point was that some engineers can choose the right problems and solve them in a way which the majority would never achieve even with unlimited time.

As an example, if you took away the top half of the engineers on my current team and gave the rest state-of-the-art Copilot (or an equivalent best-of-breed tool) from 2030, you would not end up with equivalent productivity. What would happen is that they would paint themselves into a corner of technical debt even faster, until they couldn’t write any more code (assisted or unassisted) without creating more bugs than they were solving with each change.

That doesn’t mean the improved tooling isn’t important, but it’s more akin to being a touch typist, or using a modern debugger. It improves your workflow but it doesn’t make you a better software engineer.


Maybe in 2030 the AI will be able to respond to that situation appropriately: instead of just layering more sticking plasters on top with each additional requirement and making a mess, it will re-evaluate the entire history of instructions and rearchitect/refactor the code completely if necessary?

And all this with documentation explaining what it did, and optimising the code for human readability, so that even with huge reworks you can still get the gist in a time measured in hours rather than it taking, what, weeks to do manually?


I mean, looking at the math that powers these models, I don't see how they can replace reasoning. The tokens mean absolutely nothing to the algorithm. It doesn't know how algebra works, and if you prompt ChatGPT to propose a new theorem based on some axioms it will produce something that sounds like a theorem...

... but believing it really is a theorem would be about as sound as believing that horoscopes can predict the future.

Maybe some day we'll have a model that can be trained to reason as humans do and can do mathematics on its own... we've been talking about that possibility for decades in the automated theorem proving space. However it seems that this is a tough nut to crack.

Training LLMs already takes quite a lot of compute resources and energy. Maybe we will have to wait until we have fusion energy and can afford to cool entire data centers dedicated to training these reasoning models as new theorems are postulated and proofs added.

... or we could simply do it ourselves. The ratio of energy input to output for humans is pretty good, and affordable.

However, having an LLM that also has facilities to interact with an automated theorem proving system would be a handy tool indeed. There are plenty of times in formal proofs where we want to elide the proof of a lemma we rely on because it's obvious, and proving it would be tedious without making the proof we're writing any more elegant; a future reasoning model that could understand the proof goals and use tactics to solve them would be a nice tool indeed.
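
To make the "use tactics" bit concrete, here is a minimal Lean 4 sketch (the lemma is invented, and it assumes a recent toolchain where the omega tactic is available): an "obvious" arithmetic side goal gets closed by a decision procedure instead of a hand-written proof.

    -- Illustrative only: the hypotheses make the goal "obviously" true,
    -- and omega (a linear-arithmetic decision procedure) discharges it
    -- without a manual proof term.
    example (a b c : Nat) (hab : a ≤ b) (hbc : b ≤ c) : a ≤ c := by
      omega

That is exactly the sort of goal you'd like to wave away as obvious, and plausibly the sort a model with access to the proof state could learn to dispatch.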

However I think we're still a long way from that. No reason to get hyped about it.


Maybe the prompt engineer will learn from the conversation and remove their dead-end questions. Every time you ask a question, GPT is just re-running it all, including the previous context. If you start over with a trimmed conversation, it's as if those dead ends were never there.
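
Roughly what that looks like mechanically, as a sketch with made-up types rather than any particular SDK:

    // Each request re-sends the entire message history, so dropping the
    // dead-end turns before the next call is equivalent to those turns
    // never having happened. Types here are illustrative, not a real client.
    package main

    import "fmt"

    type message struct {
        role    string // "user" or "assistant"
        content string
    }

    // trim keeps only the turns worth re-sending.
    func trim(history []message, keep []int) []message {
        out := make([]message, 0, len(keep))
        for _, i := range keep {
            out = append(out, history[i])
        }
        return out
    }

    func main() {
        history := []message{
            {"user", "How should I parse this log format?"},
            {"assistant", "Here's one approach..."},
            {"user", "Dead-end question about an unrelated bug"},
            {"assistant", "Dead-end answer"},
            {"user", "Back to the parser: handle the truncated-line case"},
        }
        // Only the productive turns go into the next request.
        next := trim(history, []int{0, 1, 4})
        fmt.Println(len(next), "messages will be re-sent")
    }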


... if churning out boilerplate is your definition of "productive".


Churning out boilerplate is a large part of the job. The quality of that boilerplate is part of what makes great engineers great engineers.

Knowing when and how to use dependency injection, making code testable, what to instrument with logs and stats, how and when to build extensible interfaces, avoiding common pitfalls like race conditions and bad time handling, and knowing when to add support for retries, speculative execution, etc. are part and parcel of what we do.
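
As a toy illustration of the dependency-injection/testability point (every name here is invented):

    // Inject the clock behind an interface so time-dependent code can be
    // tested deterministically. Purely illustrative names.
    package main

    import (
        "fmt"
        "time"
    )

    type Clock interface {
        Now() time.Time
    }

    type realClock struct{}

    func (realClock) Now() time.Time { return time.Now() }

    // fixedClock stands in for the real clock in tests.
    type fixedClock struct{ t time.Time }

    func (f fixedClock) Now() time.Time { return f.t }

    // dueDate is easy to test because the caller chooses the Clock.
    func dueDate(c Clock, days int) time.Time {
        return c.Now().AddDate(0, 0, days)
    }

    func main() {
        fmt.Println(dueDate(realClock{}, 30)) // production path

        // "Test" path: a fixed time makes the result deterministic.
        fake := fixedClock{t: time.Date(2024, 1, 1, 0, 0, 0, 0, time.UTC)}
        fmt.Println(dueDate(fake, 30)) // always 2024-01-31
    }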

If ChatGPT can help raise the quality of work while also increasing its quantity, that'll be a huge leap in productivity.

It's not all there yet, but I've been using it to write some simple programs that I can hand out to ops / business people to help automate or validate some of their tasks to ensure the smooth rollout of a feature I've developed.

The resulting ChatGPT code (I chose Go because I can hand over a big fat self-contained binary) avoids certain subtle pitfalls.

For example, the Go HTTP client requires that you call Close() on a response body, but only if there's no error.

The code it spat out does indeed do that correctly. And it's well documented.
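
For reference, the pattern it has to get right looks roughly like this (a sketch, not the exact code it generated):

    // Close the response body only after confirming there was no error;
    // on a non-nil error the response can be ignored (per the net/http
    // docs, any returned body is already closed in that case).
    package main

    import (
        "fmt"
        "io"
        "log"
        "net/http"
    )

    func main() {
        resp, err := http.Get("https://example.com")
        if err != nil {
            log.Fatal(err) // nothing to Close() here
        }
        defer resp.Body.Close() // safe only after the error check

        body, err := io.ReadAll(resp.Body)
        if err != nil {
            log.Fatal(err)
        }
        fmt.Println(len(body), "bytes")
    }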

It's far from perfect; I've seen a subtle mistake or two in my playing around with it.

For basic stuff, I'm now not so much an author as an editor. I have my own dedicated engineer whose quality level oscillates wildly between fresh intern and grizzled veteran.

It'll only get better over time.


If churning out boilerplate is what your employer sees as the product, does it really matter? Coding ideals are valuable in a vacuum, but most companies want those PRs merged, technical debt be damned.


If this is the case they need to up their abstraction game.


"Isn't most code boilerplate?" belts out the choir.


what?


"Isn't most code boilerplate?" belts out the choir/peanut gallery/greek chorus



