Why we chose LangGraph to build our coding agent (qodo.ai)
17 points by timbilt 9 months ago | 9 comments


The article answered "why LangGraph over roll-your-own", but failed to address "why LangGraph" in the broader sense.

All of the points made here are also true for Mastra, for example.

    > One pain point has been documentation. The framework is developing very quickly and the docs are sometimes incomplete or out of date
I also found this to be the case when working with Microsoft's Semantic Kernel in the early days. Thankfully, they had a lot of examples and integration tests demonstrating usage.

Where's the AI startup using LLMs to automatically generate docs, sample code, and guides for libraries?


I think it's fair to say that "roll-your-own" would probably make less sense, but before adopting an "agentic" framework one would also have to research the state of the competition. Which is to say, I'm not sure what the alternatives to LangGraph are or how mature they are; only LlamaIndex comes to mind, but my knowledge may be out of date.


    > I think it's fair to say that "roll-your-own" would probably make less sense
It is not particularly difficult to implement the use cases the article outlines on top of existing lower-level SDKs. Yes, I'm aware that some of these platforms offer a lot more than just a DAG flow of prompts, but the article's use case can be implemented in less than a day, TBH. Speaking from experience, having done it twice now (not my choice; other decision makers were hesitant to adopt existing libs and preferred lower-level ones). There's a minimal sketch of what I mean below.
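
A minimal sketch, assuming the plain OpenAI Python SDK as the lower-level lib (the two-step flow and the prompts are made up for illustration):

    from openai import OpenAI

    client = OpenAI()  # reads OPENAI_API_KEY from the environment

    def step(prompt: str) -> str:
        # one node of the "DAG": a single prompt in, text out
        resp = client.chat.completions.create(
            model="gpt-4o-mini",
            messages=[{"role": "user", "content": prompt}],
        )
        return resp.choices[0].message.content

    def run_flow(ticket: str) -> str:
        # a linear two-node flow is just function composition;
        # branching is an if-statement on the previous node's output
        triage = step(f"Classify this bug report in one word: {ticket}")
        return step(f"Draft a fix plan for a bug classified as: {triage}")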

I just think the article would be better if it actually answered "Why LangGraph and not these?"


> I just think the article would be better if it actually answered "Why LangGraph and not these?"

Agreed! But what are the _these_?


The article is about Python LangGraph, but JS LangGraph has at least one alternative in Mastra, and certainly others as well.


When building complex multi-agent systems where each agent has its own tools, prompt, persona, etc., I've found LangGraph to be better (and easier) than AWS Bedrock and OpenAI's Agent framework.
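
For reference, this is roughly the LangGraph pattern I mean: each node is an agent with its own prompt and tools, wired into an explicit graph. A minimal sketch with the agent logic stubbed out (the state shape and node names are just for illustration):

    from typing import TypedDict
    from langgraph.graph import StateGraph, END

    class State(TypedDict):
        task: str
        result: str

    def researcher(state: State) -> dict:
        # in a real system this node would call its own LLM with
        # its own persona prompt and tool set
        return {"result": f"notes on {state['task']}"}

    def writer(state: State) -> dict:
        return {"result": f"report based on: {state['result']}"}

    graph = StateGraph(State)
    graph.add_node("researcher", researcher)
    graph.add_node("writer", writer)
    graph.set_entry_point("researcher")
    graph.add_edge("researcher", "writer")
    graph.add_edge("writer", END)

    app = graph.compile()
    print(app.invoke({"task": "compare agent frameworks", "result": ""}))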


We explored LangGraph last November and were pleasantly surprised by how different it is from LangChain. The framework had much more care put into it. It was much easier to iterate with, and the final solutions felt less brittle.

But the pricing model and deployment story felt odd. The business model around LangGraph reminded us of Next.js/Vercel: solid vendor lock-in, with every cent squeezed out of the solution. The lack of clarity on that front made us go with Pydantic AI.


> Testing and mocking is a huge challenge when developing LLM driven systems that aren’t deterministic. Even relatively simple flows are extremely hard to reproduce.

This is by far the most frustrating part of building with LLMs. Is there any good solution out there for any framework?
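
The least-bad approach I've found, framework aside, is to hide the model behind a small interface and swap in canned responses in tests, so at least the orchestration logic is deterministic. A minimal sketch (the names are made up, nothing framework-specific):

    class FakeLLM:
        def __init__(self, canned: dict):
            self.canned = canned

        def complete(self, prompt: str) -> str:
            # return a recorded response keyed by prompt, so the
            # flow under test behaves the same on every run
            return self.canned[prompt]

    def summarize(llm, text: str) -> str:
        return llm.complete(f"Summarize: {text}")

    def test_summarize():
        llm = FakeLLM({"Summarize: hello world": "a greeting"})
        assert summarize(llm, "hello world") == "a greeting"

It doesn't solve the non-determinism of the model itself, only of the code around it; for the model you're stuck with evals rather than tests.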


This, in my opinion, is the fundamental problem: because LLMs run on a computer, there is an assumption of correctness in surprising ways. In reality, when you mix a deterministic system and a probabilistic system, you always get a probabilistic system.



