Principal ML Engineer @ Disney, ex-Meta

Build a production-ready Recommender System Blueprint for a media surface of your choice. This will be an artifact you can take back to work to guide implementation, align stakeholders, and make smarter roadmap calls.
Most teams working on recommender systems don’t struggle because they lack algorithms. They struggle because they lack a practical, end-to-end playbook for shipping.
You might know the building blocks (embeddings, retrieval, ranking, bandits), but still feel stuck on the hard parts:
which metrics actually reflect what product wants
how to architect the system from logging to serving
what model to choose under real constraints
how to handle cold start and sparse data
and how to prove ML is worth it over heuristics
I’ve built recommendation systems in production across news, feeds, and sports over the last 10 years (NYT, Meta, ESPN). This course turns that experience into a clear, reusable process you can apply to your own product without needing FAANG-scale infrastructure.
By the end, you’ll be able to confidently design and evaluate production-ready recommenders, defend trade-offs, and use your Blueprint as a durable reference for future iterations.
Learn to design and evaluate a production-ready media recommender: metrics, architecture, cold start, and roadmap planning.
Understand the full lifecycle of a production recommender: data → candidate generation → ranking → serving → evaluation → iteration
Learn the systems view behind any recommender decision: components, dependencies, constraints, and failure modes
Walk away able to explain, critique, and improve recommender designs across news, video, and multi-module feeds
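To give a flavor of that systems view, here is a toy, self-contained Python sketch of the request path from candidate generation through serving and logging; every item, score, and function name in it is an invented placeholder for illustration, not course material or production code.

```python
# Toy, self-contained sketch of the request path through a recommender.
# All items, scores, and thresholds are made up; real systems back each
# stage with logging pipelines, a feature store, and trained models.

ITEM_POPULARITY = {"a": 0.9, "b": 0.7, "c": 0.5, "d": 0.2}
USER_HISTORY = {"user_1": {"a"}}  # items the user has already seen

def generate_candidates(user_id, n=3):
    # Candidate generation: cheap and high-recall (here, popularity minus seen items).
    seen = USER_HISTORY.get(user_id, set())
    pool = [item for item in ITEM_POPULARITY if item not in seen]
    return sorted(pool, key=ITEM_POPULARITY.get, reverse=True)[:n]

def rank(candidates):
    # Ranking: a heavier, precision-oriented model would score here; we reuse popularity.
    return sorted(candidates, key=ITEM_POPULARITY.get, reverse=True)

def serve(user_id, k=2):
    slate = rank(generate_candidates(user_id))[:k]
    if not slate:  # fallback keeps the surface reliable even when upstream stages fail
        slate = list(ITEM_POPULARITY)[:k]
    print(f"log impression: user={user_id} slate={slate}")  # feeds evaluation and retraining
    return slate

serve("user_1")  # -> ['b', 'c']
```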
Translate vague goals into a metrics spec: KPI, proxy metrics, diagnostics, guardrails, and cost/latency constraints
Learn the most common recsys metric traps (cannibalization, position bias, short-term vs long-term) and how to defend against them
Build a shared language with PMs and leadership so roadmap decisions are grounded in measurable outcomes
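To make "metrics spec" concrete, here is a minimal sketch of one as a plain data structure; the metric names and thresholds below are invented placeholders, not recommendations from the course.

```python
# Minimal sketch of a metrics spec as a plain data structure.
# Metric names and thresholds are invented placeholders, not recommendations.
metrics_spec = {
    "north_star_kpi": "7-day retained viewers",
    "proxy_metrics": ["slate CTR", "completion rate"],        # fast-moving stand-ins for the KPI
    "diagnostics": ["catalog coverage", "novelty", "position-debiased CTR"],
    "guardrails": {"p95_latency_ms": 150, "max_cost_per_1k_requests_usd": 0.05},
}

def violates_guardrails(observed, spec=metrics_spec):
    # The kind of check a launch review can run mechanically against the spec.
    return any(observed.get(name, 0) > limit for name, limit in spec["guardrails"].items())

print(violates_guardrails({"p95_latency_ms": 180}))  # True -> hold the launch
```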
Learn how real teams structure recommender architectures
Decide what belongs at each stage, and how to design fallbacks that keep the system reliable
Identify where LLMs actually add value, and learn how to evaluate their cost/latency trade-offs
Build a cold-start playbook for new users, new items, and new surfaces
Design safe fallbacks and measurement so the system works on day one and improves as signals accumulate
Learn techniques that help users/items “graduate” out of cold start
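As a small illustration of the fallback-and-graduation idea, here is a toy sketch; the threshold and item list are assumptions made up for this example.

```python
# Toy sketch of a cold-start fallback with a "graduation" rule.
# The threshold and item list are illustrative assumptions only.
POPULAR_ITEMS = ["trending_1", "trending_2", "trending_3"]
MIN_EVENTS_TO_GRADUATE = 20  # enough history to trust personalized scores

def recommend(user_events, personalized_model=None, k=3):
    if personalized_model is None or len(user_events) < MIN_EVENTS_TO_GRADUATE:
        # Cold start: fall back to popularity (or editorial / contextual picks).
        return POPULAR_ITEMS[:k]
    return personalized_model(user_events)[:k]

# A brand-new user with two events gets the safe fallback on day one.
print(recommend(user_events=["watched_a", "watched_b"]))
```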
Create a lightweight “proof plan” to validate impact with the smallest credible test (hypotheses, guardrails, and stop conditions)
Gain confidence pushing back on “just add ML” requests and defending the right next investment
Learn to make roadmap calls based on concrete trade-offs
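One part of a proof plan is checking whether the smallest credible test is even feasible. A rough back-of-the-envelope sample-size check, using the standard two-sided alpha = 0.05, 80%-power approximation for a proportion metric, might look like this (the baseline rate and lift are illustrative numbers only):

```python
# Back-of-the-envelope sample-size check for the "smallest credible test".
# Uses the standard approximation n_per_arm ≈ 16 * p * (1 - p) / delta^2
# (two-sided alpha = 0.05, power = 0.8) for a proportion metric.
def min_sample_per_arm(baseline_rate, min_detectable_abs_lift):
    p, delta = baseline_rate, min_detectable_abs_lift
    return int(16 * p * (1 - p) / delta ** 2)

# E.g., a 5% baseline click rate and a 0.5-point absolute lift worth acting on:
print(min_sample_per_arm(0.05, 0.005))  # ≈ 30,400 users per arm
```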

Principal ML Engineer @ Disney, ex-Meta, ex-NYT
Senior ML Engineers & Data Scientists who can build models but want sharper judgment on what to build next and how to defend it
Software Engineers transitioning into ML who want to think in systems + measurement, not just training code.
Tech Leads who need an end-to-end playbook to review designs, align metrics with product, and avoid expensive detours.
You understand what goals like retention/engagement mean and how success is measured
You're familiar with APIs/services at a high level, including latency/reliability constraints
You’ll evaluate ideas and make trade-offs using real measurement, not intuition.

Live sessions
Learn directly from Katerina Zanos in a real-time, interactive format.
Lifetime access
Go back to course content and recordings whenever you need to.
Community of peers
Stay accountable and share insights with like-minded professionals.
Certificate of completion
Share your new skills with your employer or on LinkedIn.
Maven Guarantee
This course is backed by the Maven Guarantee. Students are eligible for a full refund through the second week of the course.
9 live sessions • 16 lessons • 5 projects
Session dates: Feb 25 • Feb 27 • Mar 4 • Mar 6
Learn how to go from a vague objective to a concrete metrics stack.
Understand how your metrics spec drives core engineering choices in recommendation systems.
Practice using a metrics spec as a shared language with PMs and leadership when building a recommendation system.
Live sessions
2 hrs / week
Wed, Feb 25
4:30 PM—6:00 PM (UTC)
Fri, Feb 27
5:00 PM—6:00 PM (UTC)
Wed, Mar 4
4:30 PM—6:00 PM (UTC)
Assignments
1-2 hrs / week
Use a real use case from your work for the assignments, or use the provided media case study if you don't have one ready.
$1,300 USD