Decomposing a problem so that it becomes easy to solve is what I enjoy most about programming. I am fine with no longer having to write as much code myself, but I resent having to review so much more of it.
Now, how do we solve the problem of people blindly accepting whatever an LLM spat out based on a bad prompt? This applies universally [0] and is not a technological problem.

0 - https://www.theverge.com/policy/677373/lawyers-chatgpt-hallu...
Agreed on the review burden being frustrating. Two strategies I've found helpful for managing the cognitive load:
1. Tight issue scoping: Making sure each issue is narrowly defined so the resulting PRs are small and focused. Easier to reason about a 50-line change than a 500-line one.
2. Parallel PR workflow: Using git worktrees to have multiple small PRs open simultaneously against the same repo. This lets me break work into digestible chunks while maintaining momentum across different features.
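For anyone who hasn't tried worktrees, a minimal sketch of what I mean (repo paths and branch names here are just placeholders):

    # from the main checkout, add a sibling working tree per in-flight PR
    git worktree add ../myrepo-feature-a feature-a

    # -b creates a new branch (here off main) in its own directory
    git worktree add -b feature-b ../myrepo-feature-b main

    # each directory is a full checkout; commit and push from each independently
    git worktree list

    # clean up once a PR merges
    git worktree remove ../myrepo-feature-b

Since every worktree shares one object store, this is much cheaper than multiple clones, and there's no stash/checkout dance when switching between PRs.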
The key insight is that smaller, well-bounded changes are exponentially easier to review thoroughly. When each PR has a single, clear purpose, it's much easier to catch issues and verify correctness.
I'm finding these workflow practices help because they force me to engage meaningfully with each small piece rather than rubber-stamping large, complex changes.
> The key insight is that smaller, well-bounded changes are exponentially easier to review thoroughly.
I am not sure that is the real insight. Most people seem to prefer small, well-bounded changes; the tricky part is breaking a large task down into small but meaningful changes, isn't it? To me, that is the key.
Exactly - and that's where AI becomes really valuable as a thinking partner. I use Claude Code to have conversations with my codebase about how to slice problems down further.
The issue definition itself becomes something you can iterate on and refactor, just like code. Getting that definition tightly bounded is more critical than ever because without clear boundaries, the AI doesn't know when to stop or what constitutes "done."
It's like having a pair programming session focused purely on problem decomposition before any code gets written. The AI can help you explore different ways to break down the work, identify dependencies, and find natural seams in the problem space.
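As a made-up example of what that decomposition conversation can look like, the opening prompt might be something along these lines (the issue text and PR count are hypothetical):

    Here is the issue: "Add rate limiting to the public API."
    Before writing any code, propose 3-5 smaller PRs that each
    leave the codebase working, and list the dependencies
    between them.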