> assumes that 10x coding speed should 10x productivity
This same error in thinking happens in relation to AI agents too. Even if the agent were perfect (which isn't really possible), if the other links in the chain stay slow, the overall speed of the loop still does not increase. To increase productivity with AI you need to think about the complete loop and reorganize and optimize every link in the chain. In other words, a business has to redesign itself for AI, not just apply AI on top.
The same is true for coding with AI: you can't just do your old style of manual coding but with AI, you need a new style of work. Maybe you start with constraint design, requirements, and tests, then let the agent loose without reviewing the code by hand; that checking has to be automated, which means comprehensive automated testing. The LLM is like a blind force; you need to channel it to make it useful. LLM + constraints == accountable LLM, but an LLM without constraints is unaccountable.
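A minimal sketch of what "constraints first, then let the agent loose" might look like in practice, assuming Rust and `cargo test` as the gate. The function name `normalize_email` and its requirements are invented for illustration; in the workflow described above, the human writes the tests and the agent writes the body, and the output is accepted only when the tests pass.

```rust
// Hypothetical agent-implemented unit. In the constraint-first workflow,
// this body is the part you let the agent write; the tests below are the
// human-authored constraints that make the agent "accountable".
fn normalize_email(raw: &str) -> Result<String, String> {
    let trimmed = raw.trim().to_lowercase();
    if trimmed.contains('@') {
        Ok(trimmed)
    } else {
        Err(format!("not an email: {raw}"))
    }
}

// Human-authored constraints, written before the agent runs.
// They encode the requirements, not the implementation.
#[cfg(test)]
mod constraints {
    use super::*;

    #[test]
    fn whitespace_and_case_are_normalized() {
        assert_eq!(
            normalize_email("  Bob@Example.COM ").unwrap(),
            "bob@example.com"
        );
    }

    #[test]
    fn garbage_is_rejected() {
        assert!(normalize_email("not-an-email").is_err());
    }
}
```

The point is that the pass/fail signal comes from the test suite, not from a human reading the diff, so the loop can run unattended.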
I’ve been trying to re-orient toward this exact kind of workflow and I honestly can’t say whether it’s working.
I’ve switched to using Rust because of the rich type system and pedantic yet helpful compiler errors. I focus on high-level design, traits, and the important types, then I write integration tests and let Claude go to town. I’ve been experimenting with this approach on my side project (backend web services related to GIS - nothing terribly low level) for about 4 months now, and I honestly don’t know if it’s any faster than just writing the code myself. I suspect it’s not, or is only marginally faster at best.
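A rough sketch of the "traits and important types first" part of this approach, under my own assumptions about what a GIS backend seam could look like: the names `BoundingBox` and `FeatureStore` are invented for illustration, not taken from the actual project. The human writes the types with their invariants plus the trait; the agent fills in implementations behind that seam.

```rust
// Human-designed type: the constructor enforces an invariant,
// so downstream code (agent-written or not) can rely on it.
#[derive(Debug, Clone, Copy, PartialEq)]
struct BoundingBox {
    min_lon: f64,
    min_lat: f64,
    max_lon: f64,
    max_lat: f64,
}

impl BoundingBox {
    /// Rejects inverted boxes up front instead of trusting every caller.
    fn new(min_lon: f64, min_lat: f64, max_lon: f64, max_lat: f64) -> Result<Self, String> {
        if min_lon <= max_lon && min_lat <= max_lat {
            Ok(Self { min_lon, min_lat, max_lon, max_lat })
        } else {
            Err("min must not exceed max".to_string())
        }
    }

    fn contains(&self, lon: f64, lat: f64) -> bool {
        (self.min_lon..=self.max_lon).contains(&lon)
            && (self.min_lat..=self.max_lat).contains(&lat)
    }
}

/// The human-designed seam: the agent implements this trait, and the
/// integration tests exercise it only through the trait, so the agent
/// can churn on internals without breaking the contract.
trait FeatureStore {
    fn features_in(&self, bbox: &BoundingBox) -> Vec<(f64, f64)>;
}
```

The compiler then does part of the reviewing: an agent implementation that doesn't satisfy the trait or the types simply doesn't build.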
I often find that I end up in a place where the AI-generated code has accumulated too many issues over iterations and needs serious refactoring that the agent is incapable of performing satisfactorily. So I must do it myself, and that work is substantially harder than it would have been had I just written everything myself in the first place.
At work, I find that I have a deep enough understanding of our codebase that the agents are mostly a net loss outside of boilerplate.
Perhaps I’m holding it wrong, but I’ve been doing this for a while now. I am extremely motivated to build a successful side project and try to bootstrap myself out of the corporate world. I read blogs and watch vlogs on how others build their workflows, and I just cannot replicate these claims of huge productivity gains.