
I wouldn't call it misleading marketing; it is what it is, similar to what you get today from tools like Langsmith: observability for the LLM part of your system, but using your existing tools. You can further extend that to monitor specific LLM outputs, but that's just another layer on top.


Not talking about just monitoring outputs, though. I'm talking about monitoring the internals of the model as it reaches its output. Interpretability / observability inside the LLM itself is the hard problem, one to which considerable resources are being dedicated, not simply hooking the public-facing APIs up to observability tools like any other service API. That is just conventional telemetry. Calling this "LLM observability" implies there is something special about it, something unique to LLMs that enhances introspection into the model itself, which is not true. The title is highly misleading, classic startup-bro fake-it-til-you-make-it hustling crap, and deserves to be called out.
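For context, the "conventional telemetry" being described is just wrapping the API call like any other service call. A minimal sketch of that idea (the `call_llm` stub and record field names are hypothetical, standing in for a real provider SDK and whatever tracing backend you use):

```python
import json
import time

def call_llm(prompt):
    # Hypothetical stand-in for a real LLM provider API call.
    # Returns generated text plus token usage, as most provider APIs do.
    return {
        "text": "stub response",
        "usage": {
            "prompt_tokens": len(prompt.split()),
            "completion_tokens": 2,
        },
    }

def observed_call(prompt):
    # Conventional telemetry: latency and token counts observed at the
    # public-facing API boundary. Note there is no visibility into the
    # model's internals -- that is the interpretability problem, and
    # nothing here touches it.
    start = time.time()
    response = call_llm(prompt)
    record = {
        "latency_ms": round((time.time() - start) * 1000, 2),
        "prompt_tokens": response["usage"]["prompt_tokens"],
        "completion_tokens": response["usage"]["completion_tokens"],
    }
    print(json.dumps(record))  # in practice, emit to your tracing backend
    return response["text"]

observed_call("What is observability?")
```

Everything recorded here is available for any HTTP service, which is the parent's point: the instrumentation is not LLM-specific.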



