When I read the Show HN title, I figured this would scatter seeds on a virtual map, and when you run, it "waters" them, so you can see a garden on a map similar to Strava.
I can actually add this in future updates: the path where you ran would be shown with flowers and grass, so it looks like each step you took planted something, and then you can share it. I am currently working on the share-run feature.
The idea of the garden is the same: you run to unlock plants and trees for your garden.
The folks at artificial museum [1] would probably provide a good model for how to do this - I could imagine running around in their dataspace, planting seeds which grow over time.
Nathan Marz's talk on Specter (a Clojure library that decouples navigation from transformation) is a must-watch if you deal with data: https://www.youtube.com/watch?v=VTCy_DkAJGk
I use it in every project for data navigation and transformation, and it's more performant than standard Clojure data manipulation, while retaining types (instead of coercing back from seqs).
E.g. if you have a map and you want to increment every value in the map:
(require '[com.rpl.specter :as S])
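Filling in that first example (a minimal sketch; Specter's transform takes a path, a function, and the structure to transform):

(S/transform [S/MAP-VALS] inc {:a 1, :b 2, :c 3})
;; => {:a 2, :b 3, :c 4}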
Now let's say you have a map of vectors and want to increment all of those values?

(->> {:a [1 2], :b [3 4], :c [5 6]}
     (S/transform [S/MAP-VALS S/ALL] inc)) ;; note the navigator just got another level of nesting
=> {:a [2 3], :b [4 5], :c [6 7]}

This works for all clj data types, and of course it has navigators for recursive walking.
It took me a while to get good at Specter, but it was worth it. I hear Rama uses Specter navigators internally.
Meander is also useful if you need to map from one structure to another, and Malli if you need to write coercions that check the validity of structures at runtime (aka "schema checking").
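For a taste of Malli, a minimal sketch using its standard validate/decode API (the schema and data here are made up):

(require '[malli.core :as m]
         '[malli.transform :as mt])

(m/validate [:map [:a int?]] {:a 1})   ;; => true
(m/validate [:map [:a int?]] {:a "1"}) ;; => false

;; coercion: decode the value into the shape the schema expects
(m/decode [:map [:a int?]] {:a "1"} (mt/string-transformer))
;; => {:a 1}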
I think we can assume that even if it carries any voting power, it's far less than 14%. No startup growing like that would give up shares with the same voting rights as the founders'.
MCP is just a transport + wire format with a request/response lifecycle and, most importantly, tool-level authorization.
The essay misses the two biggest problems with MCP:
1. it does not enable AI agents to functionally compose tools.
2. MCP should not exist in the first place.
LLMs already know how to talk to every API that documents itself with OpenAPI specs, but the missing piece is authorization. Why not just let the AI make HTTP requests but apply authorization to endpoints? And indeed, people are wrapping existing APIs with thin MCP tools.
Personally, the most annoying part of MCP is the lack of support for streaming tool call results. Tool calls have a single request/response pair, which means long-running tool calls can't emit data as it becomes available – the client has to repeat a tool call multiple times to paginate. IMO, MCP could have used gRPC, which is designed for streaming; you'd also need an onComplete trigger.
I'm the author of Modex[^1], a Clojure MCP library, which is used by Datomic MCP[^2].
Re: Auth, you are correct that MCP does not specify auth (aside from env vars for e.g. API keys, a mechanism that is host-specific – another gripe of mine).
However, practically the host (e.g. Claude Desktop by Anthropic) asks for permission before calling specific MCP tools.
It is not part of the MCP spec, but it's part of most host implementations of MCP, and one of the big practical reasons for MCP's existence is to avoid giving models carte blanche HTTP access.
IMO this should be part of the MCP spec, e.g. "you can call this GET /weather endpoint any time, but to make payments via this POST /transactions request, ask for permission once or always."
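A hypothetical shape for such a policy, sketched as EDN (nothing like this exists in the spec today; the endpoints are the ones from my example above):

;; hypothetical per-endpoint authorization policy, not part of the MCP spec
{:endpoints
 [{:method :get  :path "/weather"      :policy :always-allow}
  {:method :post :path "/transactions" :policy :ask-once}]} ;; or :ask-every-time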
Aside: just because someone "defines <X> as something" does not make it true.
And this is why I call it EEE: if Anthropic had simply added HTTP API endpoints with auth, everyone would have copied it, and Anthropic would have no control over the spec, which can keep future contenders out.
The MCP spec is a moat, whether they intended it to be one or not, and we've seen this movie before. I use it, but I don't have to like it, i.e. MCP should not exist.
As previously stated [^1], OpenAI can one-up them by simply supporting any API endpoint + auth, and it would be way more powerful – but there would be no moat to defend against startups. No VC-funded corp can resist a moat.
There are plenty of things out there that don't use OpenAPI. In fact, most things don't.
Even if the universe were all OpenAPI, you'd still need a lower-level protocol to define exactly how the LLM reaches out of the box and makes the OpenAPI call in the first place. That is what MCP does. It's the protocol for calling tools.
AI can read docs, Swagger, OpenAPI and READMEs, so MCP adds nothing here. All you need is an HTTP client with authorization for endpoints.
E.g. in Datomic MCP[^1], I simply tell the model that the tool calls datomic.api/q, and it writes correct Datomic Datalog queries while encoding arguments as EDN strings without any additional READMEs about how EDN works, because AI knows EDN.
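The tool handler is roughly this shape (a sketch, not the actual datomic-mcp source; conn and the tool registration around it are assumed):

(require '[clojure.edn :as edn]
         '[datomic.api :as d])

(defn q-tool
  "Tool handler: the model sends a Datalog query as an EDN string."
  [conn query-edn]
  (d/q (edn/read-string query-edn) (d/db conn)))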
And AI knows HTTP requests, it just needs an HTTP client, i.e. we don't need MCP.
So IMO, MCP is an Embrace, Extend (Extinguish?) strategy by Anthropic. The arguments that "foundational model providers don't want to deal with integration at HTTP-level" are uncompelling to me.
All you need is an HTTP client + SSE support + endpoint authz in the client + reasonable timeouts. The API docs will do the rest.
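A minimal sketch of that client (clj-http and the allowlist shape are my assumptions, not anything specified anywhere):

(require '[clj-http.client :as http])

;; hypothetical allowlist the client enforces before any model-initiated request
(def allowed? #{[:get "https://api.example.com/weather"]})

(defn model-request [method url]
  (if (allowed? [method url])
    (http/request {:method method :url url
                   :socket-timeout 10000 :connection-timeout 10000})
    {:error (str "endpoint not authorized: " method " " url)}))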
Raw TCP/UDP sockets are more dangerous, but people will expose those over MCP anyway.
Most of the bookkeeping of how to make calls should be covered by the same details as how anything else makes a call, no? I'm not clear on what makes this different. If necessary, you could add specific data types for this, but the general ideas are almost certainly close enough that a whole new thing doesn't bring much.
Are you going to have an OpenAPI client for your local git CLI? Now every tool you might use needs an OpenAPI-compliant HTTP server? Sure, if the problem is "how to get AI to talk to web servers," I could understand this angle. But the problem space is bigger than that, and the goals are different. So having a more effective "protocol" makes sense.
Honestly, I could see a pretty easy use case for some OpenAPI endpoint definitions to be passed through to a local tool. As such, adding the ability to translate a URL to a CMD could be useful.
Outside of that, most of the effort maps 1:1. The response code would be the exit code, of course. But describing the output would then be the same. The vast majority of an OpenAPI doc is for documentation purposes.
And, I should add, I'm honestly not a huge fan of OpenAPI. I'm just guessing I will also not be a fan of MCP.
I haven't looked at the OpenAI tool calling spec, but the lack of return types in MCP, as reported by Erik Meijer, makes composition hard.
Additionally, the lack of typed encodings makes I/O unavoidable: the model has to interpret the schema of returned text values to make sense of them before passing them as input to other tools. This makes it impossible to pre-compile transformations while you wait on tool results.
IMO the endgame for MCP is to delete MCP and give AI access to a REPL, with eval authorized at the function level.
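A toy sketch of what function-level eval authorization could look like (the allowlist and get-weather are hypothetical):

;; hypothetical allowlist of symbols the model may use in forms it evals
(def allowed-sym? #{'+ 'map 'filter 'get-weather})

(defn safe-eval [form]
  (let [syms (filter symbol? (tree-seq coll? seq form))]
    (if (every? allowed-sym? syms)
      (eval form)
      (throw (ex-info "unauthorized symbol in form" {:form form})))))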
This is why, in the age of AI, I am long dynamic languages like Clojure.
Wow, I'm sorry to hear they took that down. It's a perfectly normal and rational request for accountability for January 6th, which I would actually say makes me feel more favorably about the company.
Hi HN. I built Modex, a Clojure MCP library to augment your AI models with new Tools, Resources (WIP) and Prompts (WIP).
Modex implements the MCP stdio transport and JSON-RPC wire format, which means it does not need an MCP proxy or any Anthropic Java SDK dependencies to be called from most MCP clients.
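For example, the JSON-RPC 2.0 message a client writes to the server's stdin to invoke a tool looks roughly like this (cheshire is just my pick for JSON here; the ping tool is the one mentioned below):

(require '[cheshire.core :as json])

(json/generate-string
 {:jsonrpc "2.0"
  :id      1
  :method  "tools/call"
  :params  {:name      "ping"
            :arguments {:host "example.com"}}})
;; the server replies on stdout with a matching {"jsonrpc": "2.0", "id": 1, "result": ...}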
At work I am using Modex to build simple tools like datomic-mcp[^1] that lets MCP clients like Claude Desktop diagnose support issues against our production Datomic databases.
For example, when a customer opens a support ticket about their VM being unreachable, Claude will inspect our Datomic schema, figure out which queries to run to check server state, try to reach the VM using an MCP ping tool and check the state of the host and VM in our cluster. Over time we hope to automate the bulk of recurring support queries using Modex and other MCP tools.