Context Rot: How Input Length Impacts LLM Performance

Hosted by Jason Liu and Kelly Hong

Wed, Sep 17, 2025

5:00 PM UTC (1 hour)

Virtual (Zoom)

Free to join

Go deeper with a course

Systematically Improving RAG Applications
Jason Liu

What you'll learn

Context Length Effects on LLM Reliability

Learn how input length degrades model performance through an evaluation of 18 state-of-the-art LLMs

Measuring Performance Degradation Patterns

Understand methods to test and quantify how LLMs handle varying context lengths in practice

Non-Uniform Context Processing in LLMs

Discover why LLMs don't process all tokens equally, and what that implies for real-world applications
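To make the measurement idea above concrete, here is a minimal needle-in-a-haystack style sketch, not taken from the lesson itself: it builds prompts of varying lengths with a known fact embedded at varying depths and scores whether a model surfaces it. The `call_model` argument, `build_haystack` helper, and filler text are illustrative assumptions; a real harness would plug in an actual LLM API call.

```python
def build_haystack(needle: str, filler: str, n_filler: int, depth: float) -> str:
    """Embed `needle` at a relative `depth` (0.0 = start, 1.0 = end)
    inside `n_filler` copies of `filler` text."""
    chunks = [filler] * n_filler
    chunks.insert(int(depth * n_filler), needle)
    return "\n".join(chunks)

def score(answer: str, expected: str) -> bool:
    """Loose match: did the model's answer surface the needle?"""
    return expected.lower() in answer.lower()

def run_sweep(call_model, needle, expected, filler, lengths, depths):
    """Return {(n_filler, depth): hit?} over a grid of input lengths and depths.
    `call_model` is any callable mapping a prompt string to an answer string."""
    results = {}
    for n in lengths:
        for d in depths:
            prompt = build_haystack(needle, filler, n, d)
            prompt += "\n\nWhat is the magic phrase mentioned above?"
            results[(n, d)] = score(call_model(prompt), expected)
    return results
```

Averaging the hit rate per length bucket would show whether accuracy stays flat or degrades as the haystack grows, which is the kind of non-uniform behavior the lesson examines.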

Why this topic matters

LLMs are increasingly deployed in production systems handling long documents, conversations, and complex prompts. Understanding context rot is crucial for building reliable AI applications—knowing when and why models fail helps you design better prompts, chunk data effectively, and set appropriate expectations. This knowledge prevents costly failures in real-world deployments.

You'll learn from

Jason Liu

Consultant at the intersection of Information Retrieval and AI

Jason has built search and recommendation systems for the past six years. Over the last year he has consulted for and advised dozens of startups on improving their RAG systems. He is the creator of the Instructor Python library.

Kelly Hong

Researcher at Chroma

Worked with

Stitch Fix
Meta
University of Waterloo
New York University

Sign up to join this lesson
