Shipping an Agent: Lessons from LangChain’s Own Deployment

Hosted by Jason Liu and Anika Somaia

Wed, Nov 26, 2025

6:00 PM UTC (1 hour)

Virtual (Zoom)

Free to join


Go deeper with a course

Systematically Improving RAG Applications by Jason Liu (featured in Lenny's List)

What you'll learn

Evaluate multi-turn LLM agents in production

Learn methods to test complex reasoning chains and prevent regressions before deployment

Debug common agent failure modes effectively

Identify and resolve memory, context, and prompt drift issues using production tooling

Build feedback loops for AI system improvement

Connect evaluation metrics, logs, and user data to drive iterative product development

Why this topic matters

Deploying LLM agents in production is fundamentally different from building demos. This session covers battle-tested practices for evaluation, debugging, and iteration that separate successful AI products from failed experiments. Understanding these real-world deployment challenges prepares you to build reliable AI systems that users trust and companies depend on.

You'll learn from

Jason Liu

Consultant at the intersection of Information Retrieval and AI

Jason has built search and recommendation systems for the past six years. Over the last year, he has consulted for and advised dozens of startups on improving their RAG systems. He is the creator of the Instructor Python library.

Anika Somaia

Software Engineer, LangChain

Worked with

LangChain
Stitch Fix
Meta
University of Waterloo
New York University

Sign up to join this lesson
