
I'm surprised there are developers who seem not to get twice as much done with AI as they did without.

I see it happening right in front of my eyes. I tell the AI to implement a feature that would take me an hour or more to implement and after one or two tries with different prompts, I get a solution that is almost perfect. All I need to do is fine-tune some lines to my liking, as I am very picky when it comes to code. So the implementation time goes down from an hour to 10 minutes. That is something I see happening on a daily basis.

Have you actually tried? Spent some time writing good prompts, used state-of-the-art models (o3 or Gemini 2.5 Pro), and let AI implement features for you?



Even if what you are saying is true, a significant part of a developer's time is not writing code, but doing other things like thinking about how to best solve a problem, thinking about the architecture, communicating with coworkers, and so on.

So, even if AI helps you write code twice as fast, it does not mean that it makes you twice as productive in your job.

Then again, maybe you really have a shitty job at a ticket factory where you just write boilerplate code all day. In which case, I'm sorry!


I've found that AI is incredibly valuable as a general thinking assistant for those tasks as well. You still need enough expertise to know when to reach for it, what to prompt it with, and how to validate the utility and correctness of its output, but none of that consumes as much time as the time saved in my experience.

I think of it like a sort of coprocessor that's dumber in some ways than my subconscious, but massively faster at certain tasks and with access to vastly more information. Like my subconscious, its output still needs to be processed by my conscious mind in order to be useful, but offloading as much compute as possible from my conscious mind to the AI saves a ton of time and energy.

That's before even getting into its value in generating content. Maybe the results are inconsistent, but when it works, it writes code much more quickly than any human could possibly type. Programming aside, I've objectively saved significant amounts of time and money by using AI to help not only review but also revise and write first drafts of legal documents before roping in lawyers. The latter is something I wouldn't have considered worthwhile to attempt in most cases without AI, but with AI I can go from "knowing enough to be dangerous" to quickly preparing a passable first draft on my own and having my lawyers review the language and tighten up some minor details over email. That's a massive efficiency improvement over the old process of blocking off an hour with lawyers to discuss requirements on the phone, then paying the hourly rate for them to write the first draft, and then going through Q&A/iteration with them over email. YMMV, and you still need to use your best judgement on whether trying this with a given legal task will be a productive use of time, but life is a lot easier with the option than without. Deep research is also pretty ridiculous when you find yourself with a use case for it.

In theory, there's not really anything in particular that I'd say AI lets me do that I couldn't do on my own*, given vastly more hours in the day. In practice, I find that I'm able to not only finish certain tasks more quickly, but also do additional useful things that I wouldn't otherwise have done. It's just a massive force multiplier. In my view, the release of ChatGPT has been about as big a turning point for knowledge work as computers and the Internet were.

*: Actually, that's not even strictly true. I've used AI to generate artwork, both for fun/personal reasons and for business, which I couldn't possibly have produced by hand. (I mean with infinite time I could develop artistic skills, but that's a little reductive.) Video generation is another obvious case like this, which isn't even necessarily just a matter of individual skill, but can also be a matter of having the means and justification to invest money in actors, costumes, props, etc.


> I'm surprised there are developers who seem not to get twice as much done with AI as they did without.

I think it depends a lot on what you work on. There are tasks that are super LLM-friendly, and then there are things that have so many constraints that an LLM can basically never get them right.

For example, atm we have some really complicated pieces of code that need to be carefully untangled and retangled to accommodate a change, and we have to be much more strategic about it to make sure we don't regress anything during the process.


I tried, but it is not consistently a one-hour task down to 10 minutes. I think that is what most people experience; some tasks are really hit or miss.

For greenfield "make me a plain JS app that does X", yeah, it is usually able to build a small app that I describe in under 10 minutes, where I would most likely take far more than an hour to implement it as well as the AI does.

For "hey, I have an app in framework X, implement a feature that would take me less than 30 mins", it might hit some issue and loop, hanging on its own mistakes, hallucinating dependencies, hallucinating command-line parameters and getting stuck, or just messing up whole files. When such things happen I drop it, do 'git reset --hard', and move on on my own, because trying to fix stuff by leading it has usually ended up with me spending hours fiddling with the AI and not progressing on the task.
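The reset-and-retry habit described above only works cleanly if there is a known-good commit to fall back to, and a plain 'git reset --hard' leaves any untracked files the AI created behind. A minimal sketch of the full checkpoint-and-bail workflow (the repo, file names, and "AI damage" here are made up for illustration):

```shell
set -e

# Throwaway repo standing in for a real project.
repo=$(mktemp -d)
cd "$repo"
git init -q
git config user.email demo@example.com
git config user.name demo

# Checkpoint: commit the known-good state BEFORE letting the AI edit.
echo 'function good() {}' > app.js
git add app.js
git commit -qm "checkpoint before AI edits"

# Simulate the AI mangling a tracked file and hallucinating a new one.
echo 'hallucinated nonsense' > app.js
echo 'fake-dependency' > bogus-config.txt

# Bail out: restore tracked files AND delete untracked leftovers.
git reset --hard -q
git clean -fdq

cat app.js
```

Note that 'git clean -fd' deletes untracked files for real, so it is worth a dry run with 'git clean -nd' first on a repo you care about.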

But I also tried to make greenfield apps with React and Angular instead of just "plain JS", and that also mostly went badly, getting stuck on unsolvable issues that I would not have hit just using the default templates/generators on my own.


Can you share a little bit about what your prompting is like, especially for large code bases? Do you typically restrict context to a single file/module or are you able to manage project wide changes? I'm struggling to do any large scale changes as it just eats through tokens and gets expensive very fast. And the quality of output also drops off as the context grows.


There are specific subsets of work at which it can sometimes be a huge boost. That’s a far cry from making me 2x more productive at my job overall.


I mean, I don't disagree with you when you say that something that would take an hour or more to implement would only take 10 minutes or so with AI. That kind of aligns with my personal experience. If something takes an hour, it's probably something that the LLM can do, and I probably should have the LLM do it unless I see some value in doing it myself for knowledge retention or whatever.

But working on features that can fit within a timebox of "an hour or more" takes up very little of my time.

That's what I mean: there are certain contexts where it makes sense to say "yeah, AI made me 2x-10x more productive", but taken as a whole, just how productive have you become? Actually being 2x as productive as a whole would have a profound impact.


    working on features that can fit
    within a timebox of "an hour or
    more" takes up very little of my time
What would be something that can't be broken down into one-hour tasks? Can you give a concrete example?


Almost everything. What can be done in an hour? The biggest issue for me is I have no idea how to make AI work across multiple services or dozens of packages. Maybe the organization and software need to be built from the ground up with AI in mind?

For example, I’m rebuilding a legacy backend application and need to do a lot of reverse engineering. There are a dozen upstream and downstream services, and nobody in the world knows 100% of what they do. AI doesn’t know how to look across wikis or Slack channels, or send test requests and dig through logs, but the majority of the work is this, because nobody knows the requirements. Also, a lot of the “code” is not actually code but several layers of auto-generated crap based only on API models. How can I point an AI at the project, say “here’s 20 packages involved in a 3-service call chain that’s only 2/3 documented”, and get something useful?

The code is pretty much always the easiest part of my job.

You can try to tell me that this is actually a symptom of a deeper problem, or organizational rot, or bad design choices, and I agree, but that’s out of my control and my main job is to work around this crap and still deliver. It was like this for years before 90% of current employees were hired.

To summarize, I work in micro-service hell and I don’t know how to make AI useful at all for the slow parts.


That doesn't sound like fun. Have you thought about starting something yourself?

All it takes to make a good living is to make a tool that is useful enough for people to pay you for using it.


Yea my job is a big piece of shit. I’m not interested in starting something myself. Again, the coding is the easiest part and I lack any business skills or network.



