Context Rot with ChromaDB
Hosted by Jason Liu and Kelly Hong
What you'll learn
Context Length Effects on LLM Reliability
Learn how input length degrades model performance, based on an evaluation of 18 state-of-the-art LLMs
Measuring Performance Degradation Patterns
Understand methods to test and quantify how LLMs handle varying context lengths in practice
Non-Uniform Context Processing in LLMs
Discover why LLMs don't process all tokens equally, and the implications for real-world applications
Why this topic matters
LLMs are increasingly deployed in production systems handling long documents, conversations, and complex prompts.
Understanding context rot is crucial for building reliable AI applications—knowing when and why models fail helps you design better prompts, chunk data effectively, and set appropriate expectations.
This knowledge prevents costly failures in real-world deployments.
You'll learn from
Jason Liu
Consultant at the intersection of Information Retrieval and AI
Jason has built search and recommendation systems for the past 6 years. In the last year he has consulted and advised dozens of startups on improving their RAG systems. He is the creator of the Instructor Python library.
Kelly Hong
Researcher at Chroma
Go deeper with a course
Systematically Improving RAG Applications

Jason Liu
Staff machine learning engineer, currently working as an AI consultant