Hacker News | 0x696C6961's comments

I use a very similar setup. I initially used nix to manage dev tools, but have since switched to mise and can't recommend it enough https://mise.jdx.dev/

does mise use nix underneath or did you abandon nix entirely?

Mise doesn't use nix. I think the OP is stating he replaced nix with mise.

It's not an analogy.

Sure, but it's easy to check if it's incorrect and try again.

Forgive me if "just dig your way out of the hole" doesn't sound appealing.

You're free to use whatever tools you like.

> You're free to use whatever tools you like.

This is important. I feel like a lot of people are falling into the "stop liking what I don't like" way of thinking. Further, there are a million different ways to apply an AI helper in software development. You can adjust your workflow in whatever way works best for you... or leave it as is.


You're right, though I think a lot of the pushback is due to the way companies are pushing AI usage onto employees. Not that complaining on HN will help anything...

Surely you, a programmer, can imagine a way to automate this process

No, I actually haven't made, nor desire to make, a way to automate "thinking about, researching, and solving a problem".

When you use it to look up a single word, yeah, but people here use it to look up a thousand words at once and then can't check it all.

That doesn't make the tool bad.

That's literally what they are. It's a dead-simple, self-describing JSON-RPC API that you can understand if you spend five seconds looking at it. I don't get why people get so worked up over it as if it's some big over-engineered spec.

I can run an MCP server on my local machine and connect it to an LLM frontend in a browser.

I can use the GitHub MCP without installing anything on my machine at all.

I can run agents as root in a VM and give them access to things via an MCP running outside of the VM without giving them access to secrets.

It's an objectively better solution than just giving it CLIs.


All true, except that CLI tools are composable and don't pollute your context when run via a script. The missing link for MCP would be a CLI utility to invoke it.
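A minimal sketch of what such a CLI's core might look like: it just frames an MCP `tools/call` request as JSON-RPC so a script could pipe it to a server over stdio. The tool name and arguments here are invented for illustration; only the `jsonrpc`/`method`/`params` framing follows the MCP spec.

```python
import json

def mcp_call(name: str, arguments: dict, request_id: int = 1) -> str:
    """Build the JSON-RPC payload for an MCP tools/call request."""
    return json.dumps({
        "jsonrpc": "2.0",
        "id": request_id,
        "method": "tools/call",
        "params": {"name": name, "arguments": arguments},
    })

# A shell wrapper would write this to the MCP server's stdin and read
# the JSON-RPC response back from its stdout.
print(mcp_call("search_issues", {"query": "is:open label:bug"}))
```

Everything past framing the request (transport, initialization handshake, reading responses) is what the hypothetical `mcpcli` would have to add.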

How does the agent know what CLIs/tools it has available? If there's an `mcpcli --help` that dumps the tool calls, we've just moved the problem.

The composition argument is compelling, though. Instead of CLIs, what if the agent could write code where the tools are made available as functions?

   tools.get_foo(tools.get_bar())

> what if the agent could write code where the tools are made available as functions?

Exactly, that would be of great help.
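A minimal sketch of that idea, with invented tool names and stand-in implementations (a real bridge would proxy each call as JSON-RPC to an MCP server instead):

```python
from types import SimpleNamespace

def make_toolbox(definitions):
    """Wrap a dict of name -> callable into an attribute-style namespace."""
    return SimpleNamespace(**definitions)

# Stand-ins for real tool implementations; the names are made up.
tools = make_toolbox({
    "get_bar": lambda: "bar-id-123",
    "get_foo": lambda bar_id: f"foo for {bar_id}",
})

# The composed call from the comment above, now actually runnable:
result = tools.get_foo(tools.get_bar())
print(result)  # foo for bar-id-123
```

The point is that intermediate results flow function-to-function instead of round-tripping through the model's context.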

> If there's an `mcpcli --help` that dumps the tool calls, we've just moved the problem.

I see I worded my comment completely wrong... My bad. Indeed MCP tool definitions should probably be in context. What I dislike about MCP is that the IO immediately goes into context for the AI Agents I've seen.

Example: Very early on when Cursor just received beta MCP support I tried a Google Maps MCP from somewhere on the net; asked Cursor "Find me boxing gyms in Amsterdam". The MCP call then dumped a HATEOAS-annotated massive JSON causing Cursor to run out of context immediately. If it had been a CLI tool instead, Cursor could have wrapped it in say a `jq` to keep the context clean(er).
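The jq-style trimming described above can be sketched in a few lines: filter a large API-style payload down to the few fields worth putting in context. The payload shape (`results` with `name`/`address`, plus HATEOAS `_links` noise) is invented for illustration.

```python
import json

raw = json.dumps({
    "results": [
        {"name": "Gym A", "address": "Street 1, Amsterdam",
         "_links": {"self": {"href": "https://api.example/gyms/1"}}},
        {"name": "Gym B", "address": "Street 2, Amsterdam",
         "_links": {"self": {"href": "https://api.example/gyms/2"}}},
    ],
    "_links": {"next": {"href": "https://api.example/gyms?page=2"}},
})

def summarize(payload: str) -> list[dict]:
    """Keep only name and address; drop the _links noise."""
    data = json.loads(payload)
    return [{"name": r["name"], "address": r["address"]}
            for r in data["results"]]

print(summarize(raw))
```

Only the summarized list would go into the agent's context, not the raw dump.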


I mean, what was keeping Cursor from running jq there? It's just a matter of being integrated poorly, which is largely why there was a rethink of "we just made this harder on ourselves, let's accomplish this with skills instead".

The majority of devs do the same thing.

This has been happening before LLMs too.

I wish there was a feature to say "you must re-read X" after each compaction.

Some people use hooks for that. I just avoid CC and use Codex.


Getting the context full to the point of compaction probably means you're already dealing with a severely degraded model. The more effective approach is to work in chunks that don't come close to filling the context window.

The problem is that I'm not always using it interactively. I'll give it something that I think is going to be a simple task and it turns out to be complex. It overruns the context, compacts, and then starts doing dumb things.

There's no PostCompact hook unfortunately. You could try with PreCompact and giving back a message saying it's super duper important to re-read X, and hope that survives the compacting.

What would it even mean to "re-read after a compaction"?

To enter a file into the context after losing it through compaction.

From my experience, it's not about helping anyone or CV building. I just ran into a bug or a missing feature that is blocking me.


MCP lets you hide secrets from the LLM


You can do the same thing with a CLI via env vars, no?
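A sketch of the env-var approach: the harness injects the secret into the child tool's environment, so the agent only ever writes the command line and never sees the key itself. `SECRET_STORE` and the variable name are invented; the child process here is just Python standing in for a real CLI that authenticates with the variable.

```python
import os
import subprocess
import sys

SECRET_STORE = {"EXAMPLE_API_KEY": "sk-not-for-the-model"}

def run_tool(argv):
    """Run a tool with secrets added to its environment only."""
    env = dict(os.environ)
    env.update(SECRET_STORE)  # the secret lives only in the child env
    return subprocess.run(argv, env=env, capture_output=True, text=True)

out = run_tool([sys.executable, "-c",
                "import os; print('key set:', 'EXAMPLE_API_KEY' in os.environ)"])
print(out.stdout.strip())  # key set: True
```

The remaining gap, as the Dagger reply notes, is that the agent can still `cat` a key file or `echo $EXAMPLE_API_KEY` from inside the sandbox unless the runner also scrubs output.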


Yes. I'm using Dagger, and it has great secret support: it obfuscates secrets so that even if the agent, for example, cats the contents of a key file, it will never be able to read or print the secret value itself.

tl;dr: there are a lot of ways to keep secret contents away from your agent, some without actually having to keep them "physically" separate.


FWIW mise has a task runner built-in too.


