Build an AI Growth Optimization System

Hosted by Eric Metelka

Wed, Jun 17, 2026

5:00 PM UTC (45 minutes)

Virtual (Zoom)

Free to join

What you'll learn

Identify where evals end and experiments begin

Recognize where evals stop giving signal and when to run a live experiment

Layer evals, experiments, and analytics

Apply a practical framework for sequencing eval checks, online A/B tests, and Amplitude analytics across your AI product

Automate your experimentation loop

Use AI-native tooling to generate hypotheses, run tests, and surface analysis without manual overhead at each step

Why this topic matters

Evals tell you your model is working. They don't tell you it's working for your users. The jump from offline evals to live experiments is where most AI product teams stall, and where the teams that don't stall pull ahead. This session shows you how to build the full loop.

You'll learn from

Eric Metelka

Product Leader, Amplitude Experiment | Lenny's Top 25 Contributor

Eric Metelka leads product for Amplitude Experiment, where his team builds the experimentation and feature flag platform. He has 11 years of experience building growth and experimentation systems across marketplaces, SaaS, and consumer platforms, and is a top 25 contributor to Lenny's Newsletter, where he advises PMs and growth professionals on product and career strategy.

Amplitude
Eppo
Cameo
SpotHero
PowerReviews

Sign up to join this lesson
