I've been working with LangChain to develop a POC, and when I try to track down bugs in my code I feel like I'm going down a huge rabbit hole. Every time I click into a piece of code there are 10 more places I have to look that MIGHT be the source of the bug.
I get the idea that things should be abstracted away and that's the point of a library but this feels like a little much.
First of all, every library/framework I've found is moving so fast that all the tutorials and printed material (O'Reilly books etc.) are already out of date. Many of the changes are out of necessity, since it's a rapidly developing space, but sometimes it just feels like someone got high and decided to add 3 more layers of abstraction. AI coding assistants would normally be a big help for noobs like me, but the codebase and documentation churn so much that I don't get the benefits I'd see in a more established codebase.
LangChain seems to be where a lot of the action is with regard to modularity and swapping different components into each part of the pipeline. That's important for me, because I need either local or HIPAA-compliant tools (Azure OpenAI works, Anthropic won't respond to my requests for a BAA, and running locally would need a bigger GPU than I have).
But using LangChain is a pretty horrible experience because, at least for my use case and as a noob, it's much too buried in abstractions for quick iteration. The GUI-based tools like Flowise and Langflow are too limited in the components they offer, and they mostly hide the internals, so errors are tough to track down.
I'm thrilled that there has been so much work on adding JSON output and agent capabilities at the LLM level, since hopefully it can bring some of these astronauts back to earth (or at least into low orbit).
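For what it's worth, once the model itself speaks JSON reliably, you often don't need a framework at all. Here's a minimal sketch of the kind of stdlib-only parsing I mean; the fence-stripping logic and the field names (`intent`, `confidence`) are just illustrative, not from any particular API:

```python
import json

def parse_model_json(raw: str) -> dict:
    """Parse a model's JSON reply, stripping the markdown code
    fences some models still wrap their output in."""
    text = raw.strip()
    if text.startswith("```"):
        # take the content between the first pair of ``` fences
        text = text.split("```")[1]
        if text.startswith("json"):
            # drop the language tag from ```json
            text = text[4:]
    return json.loads(text)

# Simulated model reply (field names are made up for illustration)
reply = '```json\n{"intent": "refill", "confidence": 0.92}\n```'
parsed = parse_model_json(reply)
print(parsed["intent"])  # refill
```

Ten lines I can step through in a debugger, versus ten layers of output-parser classes. That's the kind of trade native JSON modes should let us make.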