🦜🔗 MDS Chat w/ Matt #9 - LangSmith/LangChain

A framework for deploying scalable LLM applications (that isn't without its faults)


What is LangChain?

So what is LangChain? Well, LangChain is the open-source framework you’ve likely already heard of.

LangChain lets you build apps powered by large language models by abstracting away complexities in tools like the OpenAI API (or just the hard, boring stuff about LLM app deployment). It’s a framework for developing context-aware, reasoning applications with LLMs.

With LangChain, you can forget about:

🙃 Chunking logic

🫥 Reading directories of markdown files

🫠 Unnesting JSON payloads

🫣 Hacking together makeshift data pipelines (just use LCEL)!
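To make "chunking logic" concrete, here's a minimal sketch of the naive sliding-window splitter you'd otherwise hand-roll (pure Python, not LangChain code — LangChain's text splitters do this for you, with smarter separator-aware splitting on top):

```python
def chunk_text(text: str, chunk_size: int = 100, overlap: int = 20) -> list[str]:
    """Split text into fixed-size chunks with a sliding overlap."""
    if overlap >= chunk_size:
        raise ValueError("overlap must be smaller than chunk_size")
    chunks = []
    step = chunk_size - overlap  # how far the window advances each iteration
    for start in range(0, len(text), step):
        chunk = text[start:start + chunk_size]
        if chunk:
            chunks.append(chunk)
    return chunks

# 250 chars with a 100-char window and 20-char overlap -> 4 chunks
print(len(chunk_text("a" * 250, chunk_size=100, overlap=20)))  # 4
```

And this is the simple case — it doesn't even try to respect sentence or markdown boundaries, which is exactly the fiddly work the framework takes off your plate.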

Now, that’s pretty cool and will save you a bunch of time, but what’s also cool (and many don’t know) is that LangChain offers a complete suite of tools for discovery, deployment, and observability for your application.

That’s where LangSmith comes in. ⚒️

What is LangSmith?

I’ve seen similar patterns in data workflows: practitioners build systems without considering scalability or maintainability. The result is messy data and a refactor a few months down the line.

Tech moves too fast to make the same mistakes twice, so I’m here to do it right the first time. Seeking out a framework for deployment (LangSmith in this case) is a much better use of resources than a DIY approach for small- to mid-sized teams.

You can think of LangSmith as the unifying platform for the LangChain Universe. You can use it to discover new workflows, then use LangChain to author the specifics (or start from a Template), deploy them with LangServe, and flip back to LangSmith to monitor, test, and iterate on your deployment.

LangSmith provides a suite of observability tools for monitoring your LLM applications: from evaluating responses through annotations or feedback, to testing changes to your deployments and debugging wonky models.

I’d be remiss if I didn’t mention just how addictive it is to play with prompts in Hub. I think it’s a pretty fun little feature that the LangChain team should invest in.

Some Caveats

Now, admittedly, developer sentiment on LangChain is mixed (at best). I’ve personally found the documentation to be confusing and dense. While there are a ton of resources for getting started, they’re incredibly difficult to navigate and spin up. Once you’ve figured that out, however, the process is pretty seamless.

Another common complaint is ecosystem lock-in. While LangChain prevents model lock-in, it locks you into its own ecosystem instead. Because of the nature of LCEL, you’re either all-in on LangChain or not… There’s no way to partially adopt the framework just to, say, load documents.
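To see why LCEL tends to pull you all-in, here's a rough illustration of its composition pattern (these are my own toy classes, not LangChain's actual API): every step in a chain must be a pipe-composable "runnable," so plain functions and third-party components need wrapping before they fit.

```python
class Runnable:
    """Toy stand-in for the LCEL runnable pattern: composable via `|`."""

    def __init__(self, fn):
        self.fn = fn

    def invoke(self, x):
        return self.fn(x)

    def __or__(self, other: "Runnable") -> "Runnable":
        # `a | b` returns a new runnable that feeds a's output into b.
        return Runnable(lambda x: other.invoke(self.invoke(x)))

# Each stage has to be wrapped as a Runnable before it can join the chain.
load = Runnable(lambda path: f"contents of {path}")   # hypothetical loader
split = Runnable(lambda text: text.split())
count = Runnable(lambda tokens: len(tokens))

pipeline = load | split | count
print(pipeline.invoke("notes.md"))  # 3
```

The upside is very readable pipelines; the downside is that once your loading, splitting, and prompting all live inside this interface, swapping any one piece for a non-LangChain alternative means unwinding the whole chain.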

I’ve also heard (and experienced) concerns about the open-source codebase. LangChain is a highly opinionated framework and some of the opinions are… questionable.

All of my comments/opinions in the video hold, but LangChain is not a one-size-fits-all solution. I think the real benefit here is the huge open-source community and large base of support. If you’re experimenting with frameworks for developing your own LLM apps and abstracting away the hard stuff, I’d advocate giving LangChain a look in addition to the other tools out there! 😄
