I use a very similar setup. I initially used Nix to manage dev tools, but have since switched to mise and can't recommend it enough https://mise.jdx.dev/
This is important. I feel like a lot of people are falling into the "stop liking what I don't like" way of thinking. Further, there are a million different ways to apply an AI helper in software development. You can adjust your workflow in whatever way works best for you... or leave it as is.
You're right, though I think a lot of the pushback is due to the way companies are pushing AI usage onto employees. Not that complaining on HN will help anything...
That's literally what they are. It's a dead simple, self-describing JSON-RPC API that you can understand if you spend 5 seconds looking at it. I don't get why people get so worked up over it, as if it's some big over-engineered spec.
I can run an MCP server on my local machine and connect it to an LLM frontend in a browser.
I can use the GitHub MCP without installing anything on my machine at all.
I can run agents as root in a VM and give them access to things via an MCP server running outside of the VM, without giving them access to secrets.
It's an objectively better solution than just giving it CLIs.
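To make the "dead simple JSON-RPC API" point concrete, here is a minimal sketch of the request an MCP client sends to discover tools, and what a reply looks like. The method name `tools/list` and the general response shape follow the MCP spec; the `search_places` tool and its schema are made up for illustration.

```python
import json

def make_tools_list_request(request_id: int) -> str:
    """Build the JSON-RPC request an MCP client sends to discover tools."""
    return json.dumps({
        "jsonrpc": "2.0",
        "id": request_id,
        "method": "tools/list",
    })

# A hypothetical server reply: one tool, described by a JSON Schema
# for its input. This self-description is what makes MCP discoverable.
response = json.loads("""
{
  "jsonrpc": "2.0",
  "id": 1,
  "result": {
    "tools": [
      {
        "name": "search_places",
        "description": "Search for places near a location",
        "inputSchema": {
          "type": "object",
          "properties": {"query": {"type": "string"}},
          "required": ["query"]
        }
      }
    ]
  }
}
""")

tool_names = [t["name"] for t in response["result"]["tools"]]
print(make_tools_list_request(1))
print(tool_names)
```

That's essentially the whole discovery story: one request, one list of schemas.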
All true except that CLI tools are composable and don't pollute your context when run via a script. The missing link for MCP would be a CLI utility to invoke it.
How does the agent know what CLIs/tools it has available? If there's an `mcpcli --help` that dumps the tool calls, we've just moved the problem.
The composition argument is compelling, though. Instead of CLIs, what if the agent could write code where the tools are made available as functions?
> what if the agent could write code where the tools are made available as functions?
Exactly, that would be of great help.
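A rough sketch of that idea: instead of the model emitting one tool call per step (with every intermediate result dumped into context), each tool is exposed as a plain function and the model writes a short script that composes them. All names below (`search_gyms`, `top_rated`) are hypothetical stand-ins, not any real MCP tool.

```python
def search_gyms(city: str) -> list[dict]:
    # Stand-in for a real MCP tool call; returns structured data.
    return [
        {"name": "Golden Gloves", "city": city, "rating": 4.8},
        {"name": "Ringside", "city": city, "rating": 4.2},
    ]

def top_rated(places: list[dict], n: int = 1) -> list[str]:
    # Composition happens in code; the intermediate JSON
    # never needs to enter the model's context window.
    best = sorted(places, key=lambda p: p["rating"], reverse=True)
    return [p["name"] for p in best[:n]]

# The model-written "glue" script: only this final output
# would flow back into the conversation.
print(top_rated(search_gyms("Amsterdam")))
```

Only the last `print` has to reach the model; everything in between stays out of context.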
> If there's an `mcpcli --help` that dumps the tool calls, we've just moved the problem.
I see I worded my comment completely wrong... My bad. Indeed MCP tool definitions should probably be in context. What I dislike about MCP is that the IO immediately goes into context for the AI Agents I've seen.
Example: Very early on, when Cursor had just received beta MCP support, I tried a Google Maps MCP from somewhere on the net and asked Cursor "Find me boxing gyms in Amsterdam". The MCP call then dumped a massive HATEOAS-annotated JSON response, causing Cursor to run out of context immediately. If it had been a CLI tool instead, Cursor could have wrapped it in, say, a `jq` filter to keep the context clean(er).
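The `jq` idea in miniature: filter the response down to the fields the model actually needs before anything reaches the context window. The payload below mimics a HATEOAS-heavy places API response and is entirely made up; the list comprehension is the equivalent of `jq '.results[] | {name, address}'`.

```python
import json

# Fake raw tool output: useful fields buried under hypermedia links
# and metadata the model never needs to see.
raw = json.dumps({
    "results": [
        {
            "name": "Amsterdam Boxing Club",
            "address": "Somestraat 1",
            "_links": {"self": {"href": "https://api.example.com/places/1"}},
            "meta": {"etag": "abc123", "ttl": 3600},
        }
    ],
    "_links": {"next": {"href": "https://api.example.com/places?page=2"}},
})

# Keep only name and address; drop the HATEOAS noise.
slim = [
    {"name": r["name"], "address": r["address"]}
    for r in json.loads(raw)["results"]
]
print(slim)
```

A few dozen bytes reach the model instead of kilobytes of link annotations.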
I mean, what was keeping Cursor from running jq there? It's just a matter of poor integration, which is largely why there was a rethink of "we just made this harder on ourselves, let's accomplish this with skills instead".
Getting the context full to the point of compaction probably means you're already dealing with a severely degraded model; the more effective approach is to work in chunks that don't come close to filling the context window.
The problem is that I'm not always using it interactively. I'll give it something that I think is going to be a simple task and it turns out to be complex. It overruns the context, compacts, and then starts doing dumb things.
There's no PostCompact hook, unfortunately. You could try with PreCompact, giving back a message saying it's super duper important to re-read X, and hope that survives the compaction.
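For reference, a sketch of what that could look like in a Claude Code `settings.json` (the hook structure here is my best understanding of the format; check the hooks documentation before relying on it, and whether the echoed message actually survives compaction is exactly the open question above):

```json
{
  "hooks": {
    "PreCompact": [
      {
        "hooks": [
          {
            "type": "command",
            "command": "echo 'IMPORTANT: after compaction, re-read PLAN.md before continuing.'"
          }
        ]
      }
    ]
  }
}
```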
Yes, I'm using Dagger, and it has great secret support: it obfuscates secrets so that even if the agent, for example, cats the contents of a key file, it will never be able to read or print the secret value itself.
tl;dr: there are a lot of ways to keep secret contents away from your agent, some without actually having to keep them "physically" separate.
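The obfuscation idea can be sketched in a few lines (this is an illustration of the general technique, not Dagger's actual implementation): run the agent's command yourself, then scrub any known secret values from the output before the agent ever sees it.

```python
import subprocess

# Hypothetical secret registry: names mapped to values the agent
# must never observe directly.
SECRETS = {"API_KEY": "hunter2-super-secret"}

def run_for_agent(cmd: list[str]) -> str:
    """Run a command, replacing secret values with placeholders."""
    out = subprocess.run(cmd, capture_output=True, text=True).stdout
    for name, value in SECRETS.items():
        out = out.replace(value, f"<redacted:{name}>")
    return out

# Even if the agent cats a file containing the secret, it only
# ever sees the placeholder, never the value.
print(run_for_agent(["echo", "token=hunter2-super-secret"]))
```

The secret still flows through the tool for real work; only the agent-facing transcript is scrubbed.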