Cheat at Search Essentials: Evaluation, NDCG, and pals

Hosted by Doug Turnbull

Thu, May 14, 2026

4:00 PM UTC (1 hour)

Virtual (Zoom)

Free to join


Go deeper with a course

Cheat at Search with Agents
Doug Turnbull
View syllabus

What you'll learn

Basics of retrieval evaluation

How the best search teams evaluate and measure search relevance

Where practice diverges from theory

When to ignore traditional search metrics and just trust your A/B tests. How to interpret offline metrics.

Pros / cons of different types of labeled relevance data

Do you use clicks? Human labels? LLM as a judge? What are the pros/cons of these approaches?
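As a taste of the offline metrics covered here, below is a minimal sketch of NDCG, the metric named in this lesson's title. It assumes graded relevance labels (e.g. 2 = highly relevant, 1 = partially relevant, 0 = not relevant) for an already-ranked result list; the function names and label scale are illustrative, not from any particular library.

```python
import math

def dcg(relevances):
    """Discounted cumulative gain: each graded label is discounted
    by log2 of its (1-indexed) rank + 1, so top positions count more."""
    return sum(rel / math.log2(rank + 2) for rank, rel in enumerate(relevances))

def ndcg(relevances, k=None):
    """NDCG: DCG of this ranking divided by DCG of the ideal
    (best-possible) ordering of the same labels."""
    rels = relevances[:k] if k else relevances
    ideal = sorted(relevances, reverse=True)
    ideal = ideal[:k] if k else ideal
    ideal_dcg = dcg(ideal)
    return dcg(rels) / ideal_dcg if ideal_dcg > 0 else 0.0

# Hypothetical judged ranking: a highly relevant doc first,
# then an irrelevant one ranked above two relevant ones.
print(round(ndcg([2, 0, 1, 2], k=4), 3))
```

A perfect ordering scores 1.0; the penalty grows the higher an irrelevant document is ranked, which is exactly the position-sensitivity that simpler metrics like precision@k lack.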

Why this topic matters

To get better at search, you need to know the core metrics teams have historically used to evaluate it. You'll see where those metrics stop working, and what to do when they do.

You'll learn from

Doug Turnbull

Ex-Reddit, Ex-Shopify. Author of AI Powered Search and Relevant Search

In 2012, Doug was bitten by the search bug, and he's been trying to keep up ever since. From full-text search, to Learning to Rank models, to search agents that generate their own code, he knows the endless landscape firsthand. Doug wants to deeply understand the what, how, and why, and to help teams use these technologies practically, distinguishing hype from reality.

He’s led search at Reddit, Shopify, and Wikipedia, authored Relevant Search and AI Powered Search, and advised 100+ organizations over the years, all in pursuit of the same question: how does search actually work?

Reddit
Shopify.com
Wikipedia
OpenSource Connections

Sign up to join this lesson

By continuing, you agree to Maven's Terms and Privacy Policy.