Langchain, with its kitchen-sink approach, is great for newbies playing around with language models, to get a feel for the different things possible. The uniform interfaces make it very easy to snap pieces together and try things out (if you don’t care to understand what’s happening under the hood).
Once you know exactly what you need (which might be 10% of Langchain’s capabilities), it might make sense to avoid all the cruft and build your own simple wrappers suited to the task at hand — especially if you need a robust and transparent/debuggable system. Langchain has a little too much indirection, and clumsy abstractions, to be fit for this purpose.
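To make the "build your own simple wrappers" point concrete, here is a minimal sketch of such a wrapper. The `transport` callable is a hypothetical stand-in for whatever API call you actually make (e.g. an OpenAI chat completion request); injecting it keeps the wrapper transparent, debuggable, and trivially testable, with no hidden indirection.

```python
# A thin DIY replacement for LangChain's LLM + prompt-template abstractions.
# `transport` is a hypothetical stand-in for a real API call; inject the
# real client in production.
from dataclasses import dataclass
from typing import Callable

@dataclass
class LLM:
    transport: Callable[[str], str]  # takes a prompt, returns completion text

    def complete(self, template: str, **variables) -> str:
        prompt = template.format(**variables)  # plain string formatting, no magic
        return self.transport(prompt)

# Usage with a fake transport (swap in a real API call for production):
llm = LLM(transport=lambda p: f"echo: {p}")
print(llm.complete("Summarise: {text}", text="hello"))  # echo: Summarise: hello
```

The whole "chain" is just composing `complete` calls in ordinary Python, which is exactly what you can step through in a debugger.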
Any examples? How can I see the exact interactions behind the scenes that it's performing in order to DIY? Setting verbose to true doesn't seem to show everything
Sort of. I’ve found its value, currently, is not in its intended purpose of building “chains” of tools. The moment you have to do something non-standard you have to build from scratch, which is more difficult with all of the abstraction involved.
Where I do find it useful is the many tools it saves me from having to build from scratch. For example, a page scraper and an embedding service with retries are two things I use it for in the bot I built for my company’s Slack and Discord. In theory I could see using it more in my project eventually. https://github.com/ShelbyJenkins/shelby-as-a-service
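The "embedding service with retries" part is easy to hand-roll if you ever want to drop the dependency. A sketch, assuming a generic `embed_fn` (a hypothetical embedding API call) that can raise transient errors; a flaky fake demonstrates the backoff logic without any network access:

```python
# Hand-rolled retry-with-exponential-backoff around an embedding call.
import time

def embed_with_retries(embed_fn, text, max_attempts=3, base_delay=0.01):
    for attempt in range(max_attempts):
        try:
            return embed_fn(text)
        except Exception:
            if attempt == max_attempts - 1:
                raise  # out of attempts: surface the error
            time.sleep(base_delay * (2 ** attempt))  # exponential backoff

# Fake embedder that fails twice, then returns a vector:
calls = {"n": 0}
def flaky_embed(text):
    calls["n"] += 1
    if calls["n"] < 3:
        raise RuntimeError("transient error")
    return [0.1, 0.2, 0.3]

vec = embed_with_retries(flaky_embed, "hello")
print(vec)  # [0.1, 0.2, 0.3]
```

In production you would also want to catch only the specific transient exception types your client raises, rather than bare `Exception`.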
Yes, the map-reduce chains for summarisation, to work around the context size limits of LLMs. It works, though there is still a lot of debugging to do on why it is much slower than expected (fortunately it is fast enough for the task).
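For anyone unfamiliar with the pattern, here is the control flow in a few lines. The `summarise` callable is a hypothetical LLM call; a trivial truncating fake stands in so the map/reduce structure is visible. (The map step's calls being sequential rather than parallel is one common source of the kind of slowness mentioned above.)

```python
# Sketch of map-reduce summarisation for text longer than the context window.
def chunk(text, size):
    # Split the input into fixed-size pieces that each fit in the context.
    return [text[i:i + size] for i in range(0, len(text), size)]

def map_reduce_summary(text, summarise, chunk_size=100):
    # Map: summarise each chunk independently.
    partials = [summarise(c) for c in chunk(text, chunk_size)]
    # Reduce: summarise the concatenation of the partial summaries.
    return summarise(" ".join(partials))

fake = lambda t: t[:20]  # stand-in "summariser": keep the first 20 characters
print(map_reduce_summary("a" * 250, fake, chunk_size=100))
```

A real version would recurse in the reduce step when the joined partial summaries still exceed the context limit.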
Langchain has a lot of surface area and is still on 0.0.x, so it is also much advised to pin your versions hard.
We do, but mostly as a fairly nice and fairly well-documented interface for plugging together tools like vector DBs and memory with LLMs.
My feeling is that with the agentic/AutoGPT hype fading and OpenAI adding the functions API there's a lot less value in LangChain's abstractions, at least for production use cases. They're still cool for hacking/toys.
Some of the abstractions seem a bit forced, but it seems nice to be abstracted away from how you'd handle vector storage or the LLM that get used (maybe someone launches something that does better than GPT4 on some sets of tasks and a few line change to swap over would be nice).
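The "few line change to swap over" idea doesn't need a framework, though. A minimal sketch of a provider registry behind one interface; the provider names and the fake callables here are purely illustrative, not real client code:

```python
# A registry of LLM backends behind one `complete` interface, so swapping
# models is a one-line change at the call site.
PROVIDERS = {}

def register(name):
    def decorator(fn):
        PROVIDERS[name] = fn
        return fn
    return decorator

@register("fake-gpt4")
def fake_gpt4(prompt):
    return "gpt4:" + prompt

@register("fake-new-model")  # hypothetical "something better" launches later
def fake_new_model(prompt):
    return "new:" + prompt

def complete(prompt, provider="fake-gpt4"):
    return PROVIDERS[provider](prompt)

print(complete("hi"))                              # gpt4:hi
print(complete("hi", provider="fake-new-model"))   # new:hi
```

Each registered function would wrap the real client call for that vendor; the rest of your code only ever touches `complete`.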
Although LangChain might not be great as a library/framework, it could still be a great starting point to learn about programming patterns for prompt/actor programming.
They used to have a "concepts" page that links to relevant papers but it seems that the page is gone from the doc for some reason :-(
I’ll save yall some time. If you are familiar with concatenating strings in python, langchain is a library which does just that, only a little less cleanly. Hope this helps!
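The joke, in code. This is roughly what a "prompt template" reduces to once the wrapping is stripped away (the field names here are made up for illustration):

```python
# A "prompt template" as plain string formatting.
def prompt(question: str, context: str) -> str:
    return f"Context:\n{context}\n\nQuestion: {question}\nAnswer:"

print(prompt("What is 2+2?", "Basic arithmetic."))
```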
That's a bit cynical. It does many other things as well. For example, it exposes simple re-usable modules, also known as "functions" in Python. Sometimes it also uses Python "lists" by wrapping them into... classes.
I noticed there is a lot of negativity against langchain on HN. Is it because it's very popular?
For example, deeplearning.ai has courses on LangChain. Do you think someone like Andrew Ng would stand behind an overhyped tech?
It's worth cribbing ideas from. But most of what it does is simple enough and wrapped in enough abstraction that it's often easier to just take those ideas and apply them to your own code vs. using LangChain itself.
Absolutely, Andrew Ng would make questionable decisions imo. I was shocked to see him teach his classes with MATLAB year over year when everyone and their mother was proficient in Python.