9.0
(34 ratings)
2 Weeks
Cohort-based Course
Learn from a world-leading expert how to design and analyze trustworthy A/B tests to evaluate ideas, integrate AI/ML, and grow your business
Hosted by
Dr. Ronny Kohavi
Technical Fellow and VP at Microsoft & Airbnb, co-author of the A/B testing book
This course is popular
6 people enrolled last week.
Course overview
Through multiple real examples and real stories, you will see the humbling reality of surprising experiments that ran at companies including Microsoft, Amazon, and Airbnb.
You will understand the cultural and technical challenges in designing and running trustworthy controlled experiments, or A/B tests, including the importance of the Overall Evaluation Criterion (OEC), scaling, pitfalls, Twyman's law, triggering, ethics, and many unaddressed challenges.
01
Data science managers and scientists will be able to design experiments and interpret their results in a trustworthy manner
02
Program managers focused on growth, revenue, conversions, and prioritization will understand how to provide the org with robust, clear metrics
03
Engineering leaders will be able to make their organizations more data-driven and efficient, with fewer severe incidents, through A/B tests
The advantage of a live course is the benefit of hearing real, memorable stories, many of which don't make it into books or articles, drawn from over 20 years of experimentation: motivation, pushback, and success/failure rates
Understand the key benefits of controlled experiments, or A/B tests, including causality, surprising examples, metrics, interpreting results, trust and pitfalls, Twyman's law, and A/A tests
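A/A tests (running the same experience in both variants) are one of the simplest trust checks: with no real difference, roughly alpha of them should come out statistically significant. As a hedged illustration (not from the course material), here is a minimal Python simulation of repeated A/A tests using a two-proportion z-test; the conversion rate, sample sizes, and number of runs are all arbitrary choices for the sketch:

```python
import random
import math

def two_proportion_p_value(p1, n1, p2, n2):
    """Two-sided p-value for a difference in conversion rates (normal approx.)."""
    pooled = (p1 * n1 + p2 * n2) / (n1 + n2)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
    z = (p1 - p2) / se
    # two-sided p-value from the standard normal CDF
    return 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))

random.seed(0)
n, true_rate, alpha, runs = 2_000, 0.05, 0.05, 400
false_positives = 0
for _ in range(runs):
    # both "variants" draw from the identical distribution: an A/A test
    a = sum(random.random() < true_rate for _ in range(n)) / n
    b = sum(random.random() < true_rate for _ in range(n)) / n
    if two_proportion_p_value(a, n, b, n) < alpha:
        false_positives += 1
fp_rate = false_positives / runs
print(fp_rate)  # should land near alpha = 0.05
```

If the observed false-positive rate drifts far from alpha, something in the experimentation pipeline (randomization, logging, or the statistics) deserves scrutiny.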
Designing metrics is hard. There is a hierarchy of metrics, and perverse incentives lurk. The most important metrics form the OEC, the Overall Evaluation Criterion. We will look at good and bad examples.
Getting numbers is easy; getting numbers you can trust is hard. We'll discuss pitfalls and how to build a reliable and trustworthy experimentation system.
Learn about the cultural challenges, the humbling results (most ideas fail; pivoting, iterating, learning), institutional memory, ideation, prioritization, and experimentation platforms
When building AI or machine learning models, use A/B testing and triggering to evaluate models that were built offline on historical data
The course focuses on developing intuition and correcting common misunderstandings, without the statistical details, which you can find in many books. We cover p-values, statistical power, and triggering.
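To make statistical power concrete: before launching a test, you estimate how many users each variant needs in order to detect the lift you care about. The sketch below (my own illustration, not course code) applies the standard two-proportion sample-size formula; the 5% base rate and 5% relative lift are assumed example values:

```python
import math
from statistics import NormalDist

def sample_size_per_arm(base_rate, rel_lift, alpha=0.05, power=0.80):
    """Approximate users needed per variant to detect a relative lift
    in a conversion rate (two-sided test, normal approximation)."""
    z_alpha = NormalDist().inv_cdf(1 - alpha / 2)  # critical value for alpha
    z_power = NormalDist().inv_cdf(power)          # quantile for desired power
    p1 = base_rate
    p2 = base_rate * (1 + rel_lift)
    p_bar = (p1 + p2) / 2
    n = ((z_alpha * math.sqrt(2 * p_bar * (1 - p_bar))
          + z_power * math.sqrt(p1 * (1 - p1) + p2 * (1 - p2))) ** 2
         / (p2 - p1) ** 2)
    return math.ceil(n)

# detecting a 5% relative lift on a 5% conversion rate
print(sample_size_per_arm(0.05, 0.05))
```

Small relative lifts on small base rates require surprisingly large samples, which is one reason intuition about "how long to run the test" so often fails.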
You can go as technical as you want in the Q&A and community discussions
When you can't run an A/B test: quasi-experimental methods and the risks of observational causal studies
Key challenges in the field
If there is something specific you want to cover, there is time allocated for that
Accelerating Innovation with A/B Testing
Dylan Lewis
Ryan Lucht
Pavan Gangisetty
Han Dong
Scott Theisen
Sharath Bulusu
James Niehaus
Ishan Goel
Jakub Linowski
Deborah O'Malley
Aaro Wroblewski
Jialin Huang
Scott Rome
Aaron Gasperi
Jessica Porges
Haiyan Chen
Emma Ding
Manuel de Francisco Vera
Markus Wiggering
Sorin Tarna
Gabriel Rodriguez
Ronny Kohavi was an executive at Amazon, Microsoft, and Airbnb and has over 20 years of experience running A/B tests and leading experimentation teams. He loves to teach, and his papers have over 55,000 citations. He co-authored the best-selling book: Trustworthy Online Controlled Experiments: A Practical Guide to A/B Testing (with Diane Tang and Ya Xu), which is a top-10 data mining book on Amazon. He is the most viewed writer on Quora's A/B testing and received the Individual Lifetime Achievement Award for Experimentation Culture in Sept 2020.
Ronny holds a PhD in Machine Learning from Stanford University.
See more at http://www.kohavi.com
01
Introduction
02
End-to-end example, and Metrics and the OEC
03
P-values and power, End-to-end Example 2, Twyman's law and Trustworthy Experimentation
04
Cultural challenges, Prioritization, AI/Machine Learning, Complementary Techniques and Observational Causal Studies
05
Advanced topics, challenges, and requested topics
8-10AM Pacific Time
Three 2-hour sessions in week 1, on Monday, Tuesday, and Thursday
8-10AM Pacific Time
Two 2-hour sessions in week 2, on Monday and Thursday
15 minutes after each session
Interested in reading chapter 1 of my book: Trustworthy Online Controlled Experiments: A Practical Guide to A/B Testing?
4 Dec 2023
$1,999 USD
Dates
Dec 4—14, 2023
Payment Deadline
Dec 4, 2023
Real-world examples
We will review multiple real A/B tests
Deep dive design and analysis of two A/B tests
We will dive deep into the full lifecycle of designing an A/B test to answer a hypothesis and analyzing the results
Learn with a cohort of peers
Join a community of like-minded people who want to learn and grow alongside you
Requested topics
Missing anything? We have allocated time to suggest topics, collect votes, and discuss them in the last session
Sign up to be the first to know about course updates.