I was thinking of Cursor pricing. It becomes a whole different ballgame when you plug these tools into the provider's API and pay by the token. Suddenly you really start evaluating how much value you are actually getting out of the tool!
There are also some fundamental limitations to the Desktop MCP experience that are probably never getting fixed; Claude Code can spin off subagents and play around with the context, whereas I assume Claude Desktop's form factor is basically going to stay the way it is until the end of time lol.
IMO, the big problem with Aider is that it's not agentic. That's good in that it keeps costs down, but most of the edit-test-fix magic in coding agents comes from the agent loop.
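To make that concrete, here's a minimal sketch of what that loop amounts to (hypothetical helpers, not any particular tool's code): the model proposes an edit, the tests run, and the failure output gets fed back in until things pass or you give up.

    import subprocess

    def run_tests():
        # Run the project's test suite and capture its output.
        proc = subprocess.run(["pytest", "-q"], capture_output=True, text=True)
        return proc.returncode == 0, proc.stdout + proc.stderr

    def agent_loop(task, propose_and_apply_edit, max_iters=5):
        # propose_and_apply_edit(task, feedback) stands in for the LLM call
        # that actually edits files; feedback is the last failing test output.
        feedback = ""
        for _ in range(max_iters):
            propose_and_apply_edit(task, feedback)
            ok, output = run_tests()
            if ok:
                return True
            feedback = output
        return False

The value (and the cost) comes from how many times that loop runs before you ever look at the result.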
There are a few coding MCPs out there. I have also written one (codemcp), and the pitch for mine is that it DOESN'T provide a bash tool by default and checkpoints every filesystem edit in Git, so that it's all about feeling comfortable letting the agent run to completion and then only inspecting the final result. The oldest one in the space, I think, is wcgw.
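The checkpointing idea is roughly this (my own illustration, not codemcp's actual code): every write the agent makes gets committed, so the whole run stays reviewable and revertable afterwards.

    import subprocess
    from pathlib import Path

    def checkpointed_write(path, new_contents, message):
        # Write the file, then snapshot the change as a Git commit so each
        # of the agent's edits can be inspected or rolled back later.
        Path(path).write_text(new_contents)
        subprocess.run(["git", "add", path], check=True)
        subprocess.run(["git", "commit", "-m", f"agent checkpoint: {message}"], check=True)

With every edit in history, "let it run to completion and inspect at the end" becomes a git log / git diff exercise rather than babysitting each tool call.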
It's a bit different in that it'll save the expected output to a separate file... IMHO that's usually nicer, because test output tends to be big and keeping it separate makes more sense.
When rerunning, you can run pytest with '--force-regen' and then check the git diff to see whether all the changes were expected.
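A minimal sketch of that workflow, assuming the plugin in question is pytest-regressions (which is where the --force-regen flag comes from); the function under test here is just a stand-in:

    def make_report():
        # Stand-in for the real code under test.
        return "total: 3\nerrors: 0\n"

    def test_report(file_regression):
        # The first run writes the expected file next to the test; later runs
        # diff against it, and --force-regen rewrites it.
        file_regression.check(make_report(), extension=".txt")

Then 'pytest --force-regen' followed by 'git diff' shows exactly what changed in the expected files, so you can eyeball whether the changes were intended.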
It's especially bad for rr, since it doesn't otherwise have any reason to talk to the Internet (I see people mentioning Firefox telemetry, but you know, Firefox is a browser, you expect it to talk to the net).
The best I can think of is to incentivize it other ways; e.g., telemetry only for bug reporting, or a "you ping us, we give you a nice hat" or something.