I built something similar to this before LangGraph had their agent builder (@braid.ink), because Claude Code kept referencing old documentation. But the problem ended up solving itself when LangGraph released their agent builder, and Claude Code could navigate its documentation better.
The only thing I would mention is that, having built a lot of agents and worked with a lot of plug-ins and MCPs, everything is super situation- and context-dependent. It's hard to spin up a general agent that's useful in a production workflow because it requires so much configuration beyond a standard template. And if you're not monitoring it very carefully, it won't meet your requirements when it's completed. When it comes to agents, precision and control are key.
This really resonates - the opacity problem is exactly what makes MCP-based agents hard to trust in production. You can't control what you can't see.
We built toran.sh specifically for this: it lets you watch real API requests from your agents as they happen, without adding SDKs or logging code. Replace the base URL, and you see exactly what the agent sent and what came back.
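The base-URL swap described above can be sketched in a few lines. This is a minimal illustration of the pattern, not toran.sh's actual API: the proxy host name and endpoint path here are hypothetical placeholders.

```python
# Minimal sketch of the base-URL swap pattern: the agent's HTTP calls are
# built from a single configurable base, so pointing it at a monitoring
# proxy reroutes every request without touching the agent's code.
# ("proxy.example.com" is a stand-in; the real proxy URL would come from
# the monitoring service's docs.)
from urllib.parse import urljoin

UPSTREAM = "https://api.openai.com/v1/"
PROXY = "https://proxy.example.com/v1/"   # hypothetical monitoring proxy

def build_url(base: str, path: str) -> str:
    """Join the configured base URL with an API path."""
    return urljoin(base, path.lstrip("/"))

# Same agent code, two destinations: only the base changes.
direct = build_url(UPSTREAM, "chat/completions")
proxied = build_url(PROXY, "chat/completions")
print(direct)
print(proxied)
```

Because the swap happens at the transport layer, the agent framework never knows it is being observed, which is what makes the no-SDK claim possible.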
The "precision and control" point is key though - visibility is step one, but you also need guardrails. We're working on that layer too (keypost.ai for policy enforcement on MCP pipelines).
Would love to hear what monitoring approaches you've found work well for production agent workflows.
At first I was reading this like "oh boy, here we go, a marketing ploy by ChatGPT when Gemini 3 does the same thing better", but the integration with data streams and specialized memory is interesting.
One thing I've noticed in healthcare is that for the rich it is preventative, but for everyone else it is reactive. For the rich everything is an option (homeopathics/alternatives); for everyone else it is straight to generic pharma drugs.
AI has the potential to bring these to the masses, and I think for those who care, it will bring a concierge-style experience.
I’ve been writing about building Agent-First SaaS and working with teams implementing LangGraph flows.
I’ve noticed a recurring pattern where we get stuck trying to perfectly replicate a human's SOP (e.g., "click this button, then read this PDF"). While reproducing human workflows is great for trust and "human-on-the-loop" auditing, I argue it often traps us in a local optimum.
This post explores the difference between "Replica Agents" (biomimicry) and "First-Principles Agents" (optimizing for the objective function). I draw on examples like Amazon's "Chaos Storage" and AlphaGo to suggest that sometimes the most efficient agent workflow looks nothing like the human one.
Curious to hear how others are balancing "legibility" vs. "efficiency" in their agent designs.
The biggest issue with Nvidia is that their revenue is not recurring, but the market is treating their stock as if it were. That revenue, which is correlated with all semi stocks, comes from a one-time massive capex investment cycle lasting 1-2 years.
It's as simple as that, and it's why it's just not possible for this to continue.
NVDA stock does not trade at a huge multiple: only 25x EPS despite very rapid top-line growth and a dominant position on the eve of possibly the most important technology transition in the history of humankind. The market is (and has been) pricing in a slowdown.
> the eve of possibly the most important technology transition in the history of humankind.
Funnily enough, when you spend some months thinking about this intensively, the conclusion is that a monetary investment in the company that will bring about the singularity/AGI is the most irrational thing one can do.
If the enterprise is successful and the singularity/AGI is benign, you won't need money anymore; if the experiment fails, the possibility of things going rogue is very high, or even of panic from a possible series of rogue events.
So for the first time the rational thing would be to either spend that money learning poker/chess/video games or whatever game we will play with each other to feel cool while the AI takes care of everything else, or outright spend it on coke and strippers given the chance of doomsday.
Their earnings will certainly decline or at least decelerate if capex slows. I’m just saying, if the market wasn’t pricing in a slowdown, NVDA would be trading at 40-60x next year EPS, not 25x.
The most important technology transition in the history of humankind, yet Nvidia itself is not leading the software part? Are they just selling shovels, or why else would they give up the role of being the one to develop the AGI and GOD?
By what metric? In my opinion, actually solving all of the problems we currently have (and man, we have a lot of them beyond the obvious ones like climate change) would be our greatest achievement.
The optimistic thinking is that with thousands/millions of tireless 24/7 brains at work on these problems we could address more problems in biology, chemistry, physics, materials science, etc. faster. It's a bold, but not unreasonable belief.
But that’s not actually actionable. This isn’t a plan; it’s pure hope that maybe, potentially, if we just put enough money and energy into this, it will magically solve our problems. Call me a luddite, but that’s a crazy amount of copium.
Lots of hope for sure. This is the biggest, most expensive scientific endeavor in my lifetime. I can think of a lot worse things to spend the money/carbon on.
The same could be said of the COVID darlings: Zoom, Peloton, etc. They got bid up on the assumption that the present would continue into the future. That is the nature of markets. Same story with fake-meat companies; across time you will find this pattern (3D printing, etc.), each ushering in some new faddish technology. It also explains the investments into OpenAI as a hedge against a capex slowdown, so there is a captive customer.
This. It’s basic economics. The second there’s a blip, the market will be flooded with cheap used GPUs and there will be zero reason to buy new ones. At that point it will be impossible for Nvidia to sustain its revenue numbers.
It is an odd position because the P/E and forward P/E are elevated but not extreme. The bigger warning signal for me is when I see people in public talking about stocks or with stock screens open, and this happened almost four times in one week. For me that is basically the sign to start risk-managing.
Yes, a family member approached me about investing in a scheme (on Robinhood) run by an infamous dotcom-boom huckster. They had already invested 5 figures and shared that another extended family member was 6 figures deep, fully exposed. These are normal people with normal jobs.
> bigger warning signal for me is when I see people in public talking about stocks
Which is interesting "this time" because much of the issue may be due to historical and tax-quirk-fulfilling index weighting. And the people invested nearly entirely in these indexes are, kind of by definition, the people who normally don't even look at their investments. "Irrational autopilot"?
It’s never different this time. This is embedded in human nature, and people oscillate between fear and greed. That’s it. Not more complicated than that.
Does the pleb index (so to speak) hold when the cause is less “I’m going to make a ** ton of money” and more “the currency is *** and I have to store value somewhere”?