Agents easily spend >90% of their time waiting for LLMs to reply, and otherwise executing calls against other services (HTTP APIs and databases).
In my experience the performance of the language runtime rarely matters.
If there ever was a language feature that mattered for agent performance and scale, it's actually the performance of JSON serialization and deserialization.
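To put a rough number on that claim, here's a quick micro-benchmark sketch. The payload shape and sizes are invented for illustration, not taken from any real agent:

```python
import json
import time

# Hypothetical agent payload: a tool-result message carrying a large text blob.
payload = {
    "role": "tool",
    "tool_call_id": "call_123",
    "content": [{"type": "text", "text": "x" * 10_000}] * 50,
}

start = time.perf_counter()
for _ in range(100):
    blob = json.dumps(payload)        # serialize
    round_tripped = json.loads(blob)  # deserialize
elapsed = time.perf_counter() - start

print(f"100 round-trips of a ~{len(blob) // 1024} KiB message: {elapsed:.3f}s")
```

An agent that re-serializes its whole context on every turn pays this repeatedly, which is why swapping in a faster JSON codec is often a cheaper win than any runtime tuning.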
Wait, you can't be saying that TypeScript doesn't have a much more powerful type system than Go.
ADTs, mapped types, conditional types, template literal types, partial higher-kinded types, and real inference on top of all that.
TypeScript had one of the most fully loaded type systems out there while the Go team was still asking the community for examples of where generics might be useful, because they weren't sure generics would be worth it.
In my experience, the second most costly operation in agents (after LLM calls) is diffing/patching/merging asynchronous edits to resolve conflicts. Those conflict-resolution operations can call out to low-level libraries, but they are still quite expensive optimization problems compared to serialization and the like.
I've used Google's old diff-match-patch, fast-diff-match-patch (a faster Python binding of that C++ library), and Biopython (which, amazingly, supports Unicode!).
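To show the shape of the problem (not the author's actual setup), here's a naive line-level three-way merge built on the standard library's difflib — a stand-in for what diff-match-patch-style tools do far more robustly:

```python
import difflib

def changed_ranges(base, edited):
    """Base line ranges (i1, i2) that `edited` rewrote, with replacement lines."""
    sm = difflib.SequenceMatcher(a=base, b=edited)
    return [(i1, i2, edited[j1:j2])
            for tag, i1, i2, j1, j2 in sm.get_opcodes() if tag != "equal"]

def three_way_merge(base, ours, theirs):
    """Naive diff3: apply both sides' edits when they touch disjoint base
    regions; raise on overlap. Illustration only, not production-grade."""
    edits = sorted(changed_ranges(base, ours) + changed_ranges(base, theirs))
    out, pos, prev_end = [], 0, -1
    for i1, i2, repl in edits:
        if i1 < prev_end:
            raise ValueError("conflict: both sides edited the same region")
        out.extend(base[pos:i1])   # copy untouched base lines
        out.extend(repl)           # splice in one side's edit
        pos, prev_end = i2, max(prev_end, i2)
    out.extend(base[pos:])
    return out

base   = ["plan step 1\n", "plan step 2\n", "plan step 3\n"]
ours   = ["plan step 1\n", "plan step 2 (revised by A)\n", "plan step 3\n"]
theirs = ["plan step 1\n", "plan step 2\n", "plan step 3\n", "step 4 (added by B)\n"]
merged = three_way_merge(base, ours, theirs)
```

Even this toy version runs two quadratic-ish sequence alignments per merge attempt, which is where the cost relative to serialization comes from.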
1. follow Rich Hickey's advice and orchestrate all LLMs to mutate a single shared state
2. let those LLMs operate asynchronously in parallel
3. when an LLM wants to mutate the global state but the state has changed since its checkout, try to safely merge the changes using an expensive diff algorithm (which is still cheaper than another LLM call); on failure, retry
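A minimal sketch of that checkout/merge/retry loop. Threads stand in for async LLM workers, the "LLM edit" is simulated as a string append, and the merge rule is a toy (a real system would run a full diff3/diff-match-patch merge); every name here is invented:

```python
import threading

def try_merge(base, ours, theirs):
    """Toy merge rule: if both sides only *appended* to the base, keep both
    suffixes. A real system would attempt a proper diff-based merge here."""
    if ours.startswith(base) and theirs.startswith(base):
        return theirs + ours[len(base):]
    return None  # conflict -> caller retries from a fresh checkout

class SharedState:
    """The single shared state all workers mutate, guarded by a version number."""
    def __init__(self, text):
        self._lock = threading.Lock()
        self.text, self.version = text, 0

    def checkout(self):
        with self._lock:
            return self.text, self.version

    def commit(self, base_text, base_version, new_text):
        with self._lock:
            if self.version != base_version:          # someone raced us
                merged = try_merge(base_text, new_text, self.text)
                if merged is None:
                    return False                      # merge failed -> retry
                new_text = merged
            self.text, self.version = new_text, self.version + 1
            return True

def agent_worker(state, line):
    while True:                        # step 3: retry until the commit lands
        text, version = state.checkout()
        proposal = text + line         # stand-in for an expensive LLM edit
        if state.commit(text, version, proposal):
            return

state = SharedState("log:\n")
workers = [threading.Thread(target=agent_worker, args=(state, f"agent {i} acted\n"))
           for i in range(4)]
for w in workers:
    w.start()
for w in workers:
    w.join()
print(state.text)
```

The point is the structure of step 3: the expensive merge happens inside the commit's critical section, and failure falls back to a fresh checkout instead of blocking the other workers.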