CPO, GM xBraintrust, Nextdoor, OpenTable
Ex-Alexa AI Principal PM


AI increases IC leverage, but it also breaks the management systems most teams rely on.
Prototypes multiply.
“Quality” becomes subjective.
Risk can ship quietly.
Leadership gets pulled into every review because no one is sure what good looks like, or who is accountable for evaluation, failure modes, and rollout decisions.
If you’re a Director, Head of Product/Eng/Data, or VP with multiple AI initiatives in flight, you’re likely seeing the same pattern: lots of demos, uneven production outcomes, rising cost and reliability questions, and execs asking, “How do we know it’s good?” “What can go wrong?” “What’s the ROI?”
This workshop gives you a practical operating system for leading AI-native work without slowing teams down.
You’ll leave with:
a portfolio and cadence that prevent orphan prototypes and experiments,
a repeatable review process to assess risk and evaluation throughout projects, and
a new “operating system” for AI-forward teams, so that AI progress translates into ROI.
The result: faster alignment, fewer surprises, and AI work that ships to production with confidence.
Lead AI initiatives with clear goals, defensible evaluation, explicit risk decisions, and exec updates that drive alignment and investment.
Set a weekly learning review agenda and decision log to drive clarity
Build a 90-day AI portfolio (H1/H2/H3 bets, owners, decision gates)
Deploy rituals and ownership patterns that prevent orphan work and address how AI projects are different
Complete a 360-degree stakeholder review and risk register
Use a go/no-go rubric and failure mode checklist to make decisions faster
Set up systematic risk review processes to prepare for and handle inevitable surprises
Apply learnings from real-life eval examples to define key eval metrics
Plan for uncertainty when iterating toward launch criteria
Apply evals post-launch to create systems that learn, iterate and improve
Define quality bars with examples, acceptance criteria, and review gate systems
Implement scalable guardrails with rollout controls, monitoring, rollback triggers
Produce reusable operating system artifacts during the workshop
Director/Head of Product: Leads AI initiatives; turns prototypes into production with clear quality, gates, and exec updates.
Eng leader (Sr Mgr/Dir/VP): Ships AI fast while managing reliability, security, and cost with consistent eval + risk reviews.
Data/ML leader (Dir/Head): Owns evals and performance; aligns product and execs with shared language and decision-ready tradeoffs.
Live sessions
Learn directly from Elena Luneva & Polly Allen in a real-time, interactive format.
Templates and prompt playbooks
Playbooks, prompts, system design docs and templates that you can always access.
Video Modules
Watch at your own pace. Modules include live scenarios and practice, and come with tools and templates. Lifetime access to rewatch anytime.
Alumni Advantage
Alumni enjoy exclusive rates for future programs and executive labs.
Maven Guarantee
This course is backed by the Maven Guarantee. Students are eligible for a full refund up until the halfway point of the course.
4 live sessions • 5 lessons
Mar 21 • The AI-First Operating System
Mar 21 • Building Your Discovery & Experimentation Engine
Mar 22 • 360° Risk Framework
Mar 22 • Evaluation for Launch Readiness & Beyond
Live sessions
6 hrs / week
Sat, Mar 21
4:00 PM—5:30 PM (UTC)
Sat, Mar 21
5:30 PM—7:00 PM (UTC)
Sun, Mar 22
4:00 PM—5:30 PM (UTC)
Sun, Mar 22
5:30 PM—7:00 PM (UTC)
Projects
4 hrs / week
Async content
6 hrs / week
$997
USD