Production-Ready Recommenders for Media Products

Katerina Zanos

Principal ML Engineer @ Disney, ex-Meta

Design and evaluate a media recommender system that ships to production.

Build a production-ready Recommender System Blueprint for a media surface of your choice. This will be an artifact you can take back to work to guide implementation, align stakeholders, and make smarter roadmap calls.

Most teams working on recommender systems don’t struggle because they lack algorithms. They struggle because they lack a practical, end-to-end playbook for shipping.

You might know the building blocks (embeddings, retrieval, ranking, bandits), but still feel stuck on the hard parts:

  • which metrics actually reflect what the product wants

  • how to architect the system from logging to serving

  • what model to choose under real constraints

  • how to handle cold start and sparse data

  • and how to prove ML is worth it over heuristics

I’ve built recommendation systems in production across news, feeds, and sports over the last 10 years (NYT, Meta, ESPN). This course turns that experience into a clear, reusable process you can apply to your own product without needing FAANG-scale infrastructure.

By the end, you’ll be able to confidently design and evaluate production-ready recommenders, defend trade-offs, and use your Blueprint as a durable reference for future iterations.

What you’ll learn

Learn to design and evaluate a production-ready media recommender: metrics, architecture, cold start, and drafting an effective roadmap.

  • Understand the full lifecycle of a production recommender: data → candidate generation → ranking → serving → evaluation → iteration

  • Learn the systems view behind any recommender decision: components, dependencies, constraints, and failure modes

  • Walk away able to explain, critique, and improve recommender designs across news, video, and multi-module feeds

  • Translate vague goals into a metrics spec: KPI, proxy metrics, diagnostics, guardrails, and cost/latency constraints

  • Learn the most common recsys metric traps (cannibalization, position bias, short-term vs long-term) and how to defend against them

  • Build a shared language with PMs and leadership so roadmap decisions are grounded in measurable outcomes

  • Learn how real teams structure recommender architectures

  • Decide what belongs at each stage, and how to design fallbacks that keep the system reliable

  • Identify where LLMs actually add value and how to evaluate cost/latency trade-offs

  • Build a cold-start playbook for new users, new items, and new surfaces

  • Design safe fallbacks and measurement so the system works on day one and improves as signals accumulate

  • Learn techniques that help users/items “graduate” out of cold start

  • Create a lightweight “proof plan” to validate impact with the smallest credible test (hypotheses, guardrails, and stop conditions)

  • Gain confidence pushing back on “just add ML” requests and defending the right next investment

  • Learn to make roadmap calls based on concrete trade-offs
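To make the pipeline shape above concrete, here is a minimal sketch of the retrieval → ranking → serving flow with a cold-start fallback. Everything here (the popularity stand-in for retrieval, the tag-overlap stand-in for ranking, all field names) is illustrative, not the course's reference implementation; real systems would use ANN search over embeddings for retrieval and a learned model for ranking.

```python
def retrieve(user_id, catalog, k=50):
    """Candidate generation: cheaply narrow the catalog to k candidates.
    Stand-in: a popularity sort; real systems use ANN search over embeddings."""
    return sorted(catalog, key=lambda item: item["popularity"], reverse=True)[:k]

def rank(user_history, candidates, n=10):
    """Ranking: score the short candidate list with a richer (costlier) model.
    Stand-in: a toy overlap score between item tags and the user's history tags."""
    history_tags = {t for item in user_history for t in item["tags"]}
    scored = [(len(history_tags & set(c["tags"])), c) for c in candidates]
    scored.sort(key=lambda pair: pair[0], reverse=True)  # key= avoids comparing dicts
    return [c for _, c in scored[:n]]

def recommend(user_id, user_history, catalog, n=10):
    """Serving: run the pipeline, falling back to popularity when ranking
    has nothing to work with (e.g. a cold-start user with no history)."""
    candidates = retrieve(user_id, catalog)
    if not user_history:  # cold start: a safe fallback keeps the surface working day one
        return candidates[:n]
    return rank(user_history, candidates, n)
```

The design point is the shape, not the scoring: a cheap wide stage, an expensive narrow stage, and an explicit fallback path so the system degrades gracefully instead of failing.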

Learn directly from Katerina

Katerina Zanos

Principal ML Engineer @ Disney, ex-Meta, ex-NYT

Meta
The New York Times
The Walt Disney Company
ESPN
Columbia University

Who this course is for

  • Senior ML Engineers & Data Scientists who can build models but want sharper judgment on what to build next and how to defend it

  • Software Engineers transitioning into ML who want to think in systems + measurement, not just training code

  • Tech Leads who need an end-to-end playbook to review designs, align metrics with product, and avoid expensive detours

Prerequisites

  • Basic product + metrics literacy

    You understand what goals like retention/engagement mean and how success is measured

  • Familiarity with how software systems work in production

    APIs/services at a high level, latency/reliability constraints

  • Comfort reading simple data tables or dashboards

    You’ll evaluate ideas and make trade-offs using real measurement, not intuition.

What's included

Katerina Zanos

Live sessions

Learn directly from Katerina Zanos in a real-time, interactive format.

Lifetime access

Go back to course content and recordings whenever you need to.

Community of peers

Stay accountable and share insights with like-minded professionals.

Certificate of completion

Share your new skills with your employer or on LinkedIn.

Maven Guarantee

This course is backed by the Maven Guarantee. Students are eligible for a full refund through the second week of the course.

Course syllabus

9 live sessions • 16 lessons • 5 projects

Week 1

Feb 25—Mar 1

    Metrics Your Team Will Actually Align On

    Wed 2/25, 4:30 PM—6:00 PM (UTC)

    4 items

    Optional: Show & Tell: Assignment 1

    Fri 2/27, 5:00 PM—6:00 PM (UTC)

Week 2

Mar 2—Mar 8

    System Design: Logging → Retrieval → Ranking → Serving

    Wed 3/4, 4:30 PM—6:00 PM (UTC)

    4 items

    Optional: Show & Tell: Assignment 2

    Fri 3/6, 5:00 PM—6:00 PM (UTC)

Free resource

How to Design a Metrics-First Recommender System

Build a Metrics Stack That Reflects Your Product Goal

Learn how to go from a vague objective to a concrete metrics stack.

Connect Metrics to System Design Decisions

Understand how your metrics spec drives core engineering choices in recommendation systems.

Use Metrics to Drive Roadmap & Product Conversations

Practice using a metrics spec as a shared language with PMs and leadership when building a recommendation system.
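One way the "metrics stack" idea can be written down concretely is as a small spec object that a team can review and an experiment pipeline can check against. This is an illustrative sketch only; the metric names and thresholds are invented for the example, not taken from the guide.

```python
# Illustrative metrics spec: one north-star KPI, proxy metrics the model can
# optimize short-term, diagnostics for debugging, and guardrails that must
# not regress. All names and numbers here are made up for the example.
metrics_spec = {
    "kpi": "28-day retention",                          # the product goal
    "proxies": ["clicks_per_session", "watch_time"],    # short-term stand-ins
    "diagnostics": ["catalog_coverage", "position_bias_estimate"],
    "guardrails": {
        "p99_latency_ms": 150,          # serving cost/latency budget
        "content_diversity_min": 0.3,   # floor on recommendation diversity
    },
}

def violates_guardrails(observed, spec):
    """Toy check: flag an experiment that breaks any guardrail."""
    g = spec["guardrails"]
    return (observed["p99_latency_ms"] > g["p99_latency_ms"]
            or observed["content_diversity"] < g["content_diversity_min"])
```

Writing the spec down this way is what makes it usable as a shared language: a PM can argue about the KPI and guardrail values without reading model code, and engineers can wire the same spec into experiment readouts.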

Schedule

Live sessions

2 hrs / week

    • Wed, Feb 25

      4:30 PM—6:00 PM (UTC)

    • Fri, Feb 27

      5:00 PM—6:00 PM (UTC)

    • Wed, Mar 4

      4:30 PM—6:00 PM (UTC)

Assignments

1-2 hrs / week

Use a real use case from your work for the assignments, or, if you don't have one ready, use the provided media case study instead.

Frequently asked questions

$1,300 USD

Feb 25—Mar 25
Enroll