LangChain Is the New ORM: Convenient Until It Is Not


LangChain promises to simplify LLM development. Instead, it adds abstraction layers you will fight the moment your use case gets real.

I tried LangChain for a project last month. Within a day I had ripped it out and replaced it with 200 lines of Go. The 200 lines were easier to debug, faster to run, and did exactly what I needed. The LangChain version was doing something. I’m still not sure what.

This isn’t a LangChain hit piece. The team is building in a genuinely hard space, and the ambition is admirable. But the framework has the same disease that every “make everything easy” abstraction catches: it hides the things you actually need to understand.

What You Get

LangChain bundles prompt templates, chain orchestration, tool calling, retrieval helpers, and integrations with every model provider you have heard of. For a demo or prototype, this is genuinely fast. You can wire up a RAG pipeline in an afternoon.

The problem starts when the demo has to become a product.

What You Lose

Visibility. When something goes wrong – and with LLMs, something always goes wrong – you need to know exactly what was sent to the model, what came back, and how long it took. LangChain wraps these calls in layers of abstraction that make debugging feel like archaeology. “The chain produced a bad output” isn’t actionable. “This specific prompt with this context returned this response in 4.2 seconds” is.
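With direct calls, that visibility is a few lines of Go. A minimal sketch — `callModel` here is a hypothetical stand-in for whatever thin provider client you actually use, not a real API:

```go
package main

import (
	"fmt"
	"time"
)

// callModel is a hypothetical stand-in for a real provider API call.
func callModel(prompt string) (string, error) {
	time.Sleep(10 * time.Millisecond) // simulate network latency
	return "stub response", nil
}

// loggedCall wraps the model call so every request records exactly
// what was sent, what came back, and how long it took — the three
// things you need when a chain "produces a bad output".
func loggedCall(prompt string) (string, error) {
	start := time.Now()
	resp, err := callModel(prompt)
	elapsed := time.Since(start)
	fmt.Printf("prompt=%q response=%q elapsed=%s err=%v\n",
		prompt, resp, elapsed, err)
	return resp, err
}

func main() {
	loggedCall("Summarize this document.")
}
```

In a real system the `fmt.Printf` becomes structured logging, but the point stands: the request, response, and latency are all in one place, not scattered across framework internals.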

Control over tokens and cost. LangChain’s abstractions hide prompt construction details. You don’t see how many tokens your “chain” is consuming until the bill arrives. In a direct API call, the prompt is a string you can measure. In a framework, it’s assembled from pieces across multiple modules. Good luck optimizing something you can’t see.
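When the prompt is one string, measuring it is one line. A sketch of that idea — the chars-per-token ratio below is a deliberately rough heuristic for English text, not a real tokenizer:

```go
package main

import (
	"fmt"
	"strings"
)

// buildPrompt assembles the full prompt as one visible string,
// so there is nothing hidden to optimize later.
func buildPrompt(system, context, question string) string {
	return strings.Join([]string{system, context, question}, "\n\n")
}

// estimateTokens uses a rough ~4-characters-per-token heuristic.
// A real tokenizer gives exact counts; this is enough to catch
// a prompt that quietly doubled in size.
func estimateTokens(s string) int {
	return len(s) / 4
}

func main() {
	p := buildPrompt(
		"You are a helpful assistant.",
		"Context: the retrieved documents go here.",
		"Question: what changed last week?",
	)
	fmt.Printf("prompt is %d chars, ~%d tokens\n", len(p), estimateTokens(p))
}
```

You can log that estimate on every call and alert when it drifts — impossible when the prompt is assembled invisibly across modules.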

Stability. The API surface changes constantly. I’m not exaggerating. Examples from three months ago don’t work. Methods get renamed. Modules get restructured. This is understandable for a young project, but if you’re building production software, constant churn isn’t a feature – it’s a risk.

The ORM Parallel

I keep thinking about ORMs. When ActiveRecord or Hibernate first showed up, everyone loved them. Write less SQL. Move faster. Then you hit a complex query and the ORM generates something horrifying, and you spend more time fighting the abstraction than you would have spent writing the SQL.

LangChain is the same pattern. The abstraction is great until your use case doesn’t fit the happy path. Then you’re reading framework source code to understand why your retrieval chain is behaving strangely, and you realize you could have written the whole thing as a function.

When It Actually Helps

I’m not saying never use it. LangChain makes sense in two situations:

Rapid prototyping where the goal is exploration. If you’re testing five different model providers and three retrieval strategies in a week, LangChain’s integrations save real time. Just don’t mistake the prototype for the production system.

Teams with no LLM experience who need to ship something. The framework encodes good patterns. Prompt templates, retrieval, memory management – these are patterns you would need to build anyway. LangChain gives you a starting point. Just plan to outgrow it.

For anything else – especially if you’re writing Go, which LangChain doesn’t natively support well – you’re better off with a thin wrapper around the model API and your own orchestration logic.

My Approach

I keep my LLM integration layer intentionally boring. A prompt builder. An API client with retries and timeouts. A validation layer. A caching layer. That’s four components, each under 100 lines, each completely transparent.

When something breaks, I know where to look. When costs spike, I know which prompt grew. When I need to swap models, I change one function.
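The model-swap seam can literally be one function value. A sketch — both implementations below are illustrative stubs, not real provider clients:

```go
package main

import "fmt"

// ModelCall is the one seam where providers differ.
// Swapping models means reassigning this one function.
type ModelCall func(prompt string) (string, error)

// Illustrative stand-ins for real provider clients.
func hostedModel(prompt string) (string, error) { return "hosted: " + prompt, nil }
func localModel(prompt string) (string, error)  { return "local: " + prompt, nil }

func main() {
	model := ModelCall(hostedModel)
	resp, _ := model("summarize this")
	fmt.Println(resp)

	model = localModel // the one-line model swap
	resp, _ = model("summarize this")
	fmt.Println(resp)
}
```

Everything upstream — the prompt builder, validation, caching — takes a `ModelCall` and never knows which provider is behind it.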

The framework would have given me the same capabilities wrapped in ten modules, three base classes, and a configuration DSL. No thanks.

The Takeaway

Use LangChain for prototypes. Use it to learn patterns. But design your production system to not need it. The best LLM integration code I’ve seen is the simplest: direct API calls, explicit prompt management, and clear error handling.

Frameworks are tools, not architectures. The moment you can’t explain what your code is doing without referencing the framework’s internals, you have traded convenience for confusion. And in a space that moves this fast, confusion is the most expensive dependency of all.