Now I'm more confused. So does that mediating agent code constitute a separate agent Z, making it three agents X, Y, Z? Explicitly or not (is this the meaningful distinction?), information flowing between them constitutes communication for this purpose.
It's a hypothetical example where I already have two agents and then make one affect the other.
We get what an LLM context is, but we're again trying to tease out what an agent is. Why not play along by actually trying to answer directly so we can be enlightened?
I don't understand what the problem is at this point. You can, without introducing any new agents, have a system that has one LLM context reading from tickets and producing structured outputs, another LLM context that has access to a full read-write SQL-executing MCP, and then normal human code intermediating between the two. That isn't even complicated on the normal scale of LLM coding agents.
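Concretely, the shape I mean is something like the sketch below. It's rough and hypothetical: call_llm, the context names, and the ticket/SQL details are stand-ins, not any real LLM or MCP API; the point is only that plain deterministic code sits between the two contexts.

    import json
    from dataclasses import dataclass

    def call_llm(context: str, prompt: str) -> str:
        """Hypothetical stand-in for whatever LLM client you actually use."""
        raise NotImplementedError

    @dataclass
    class TicketSummary:
        ticket_id: str
        customer_id: str
        issue_type: str  # constrained to a fixed vocabulary below

    ALLOWED_ISSUE_TYPES = {"billing", "login", "refund"}

    def summarize_ticket(ticket_text: str) -> TicketSummary:
        """Context 1: sees the untrusted ticket text, has no tools, and can
        only emit structured output."""
        raw = call_llm(
            context="ticket-triage",  # no MCP, no SQL access
            prompt=f"Summarize this support ticket as JSON: {ticket_text}",
        )
        summary = TicketSummary(**json.loads(raw))
        # Deterministic validation in ordinary code, not another LLM pass.
        if summary.issue_type not in ALLOWED_ISSUE_TYPES:
            raise ValueError(f"unexpected issue type: {summary.issue_type}")
        return summary

    def handle_ticket(ticket_text: str) -> str:
        """Plain human-written code intermediates between the two contexts."""
        summary = summarize_ticket(ticket_text)
        # Context 2: has the read-write SQL MCP, but never sees the raw
        # ticket text, only the validated structured fields.
        return call_llm(
            context="db-operator",  # this context gets the SQL tools
            prompt=(
                f"Resolve a {summary.issue_type} issue for customer "
                f"{summary.customer_id} (ticket {summary.ticket_id})."
            ),
        )

The security boundary is expressed in the intermediating code (the schema and the allow-list), not in either model's prompt.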
Cursor almost certainly has lots of different contexts you're not seeing as it noodles on JavaScript code for you. It's just that none of those contexts are designed to express (or, rather, enable agent code to express) security boundaries. That's a problem with Cursor, not with LLMs.
I don't think anyone has a cohesive definition of "agent", and I wish tptacek hadn't used the term "agent" when he said "agent code", but I'll at least say that I now feel confident that I understand what tptacek is saying (even though I still don't think it will work, but we can at least now talk at each other rather than past each other ;P)... and you are probably best off just pretending neither of us ever said "agent" (despite the sheer number of times I had said it, I've stopped in my later replies).
The thing I naturally want to say in these discussions is "human code", but that's semantically complicated by the fact that people use LLMs to write that code now. I think of "agent code" as the distinct kind of computing that is hardcoded, deterministic, non-dynamic, as opposed to the stochastic outputs of an LLM.
What I want to push back on is anybody saying that the solution here is to better train an LLM, or to have an LLM screen inputs or outputs. That won't ever work --- or at least, it working is not on the horizon.
Anthropic calls this "workflow"-style LLM coding rather than "agentic", as in this blog post (which pretends it is about agents for hype, but the most valuable part of it is actually about workflows).
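The distinction that post draws is roughly whether the control flow lives in ordinary code or is chosen by the model. A minimal workflow-style sketch, with call_llm again a hypothetical stand-in for whatever client you use:

    def call_llm(context: str, prompt: str) -> str:
        """Hypothetical stand-in for an LLM client."""
        raise NotImplementedError

    def review_ticket_workflow(ticket_text: str) -> str:
        # Workflow style: a human-written, fixed sequence of steps; the LLM
        # fills in each step but never decides what runs next.
        summary = call_llm("summarizer", f"Summarize this ticket:\n{ticket_text}")
        category = call_llm("classifier", f"Classify this summary:\n{summary}")
        return call_llm("drafter", f"Draft a reply for a {category} issue:\n{summary}")

    # An "agentic" version would instead hand the model a set of tools and
    # let it decide, in a loop, which one to call next.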