Lead AI-First Teams Workshop

Elena Luneva

CPO & GM, ex-Braintrust, Nextdoor, OpenTable

Polly Allen

Ex-Alexa AI Principal PM

Lead AI-heavy teams with new playbooks for prototypes, risk assessment, and eval

AI increases IC leverage, but it also breaks the management systems most teams rely on.

Prototypes multiply.
“Quality” becomes subjective.
Risk can ship quietly.
Leadership gets pulled into every review because no one is sure what good looks like, or who is accountable for evaluation, failure modes, and rollout decisions.

If you’re a Director, Head of Product/Eng/Data, or VP with multiple AI initiatives in flight, you’re likely seeing the same pattern: lots of demos, uneven production outcomes, rising cost and reliability questions, and execs asking, “How do we know it’s good?” “What can go wrong?” “What’s the ROI?”

This workshop gives you a practical operating system for leading AI-native work without slowing teams down.

You’ll leave with:

  • a portfolio and cadence that prevent orphan prototypes and experiments,

  • a repeatable review process to assess risk and evaluation throughout each project, and

  • a new 'operating system' for AI-forward teams so that AI progress translates into ROI.

The result: faster alignment, fewer surprises, and AI work that ships to production with confidence.

What you’ll learn

Lead AI initiatives with clear goals, defensible evaluation, explicit risk decisions, and exec updates that drive alignment and investment.

  • Set a weekly learning review agenda and decision log to drive clarity

  • Build a 90-day AI portfolio (H1/H2/H3 bets, owners, decision gates)

  • Deploy rituals and ownership patterns that prevent orphan work and address how AI projects differ

  • Complete a 360-degree stakeholder review and risk register

  • Use a go/no-go rubric and failure mode checklist to make decisions faster

  • Set up systematic risk review processes to prepare for and handle inevitable surprises

  • Apply learnings from real-life eval examples to define key eval metrics

  • Plan for uncertainty when iterating toward launch criteria

  • Apply evals post-launch to create systems that learn, iterate and improve

  • Define quality bars with examples, acceptance criteria, and review gate systems

  • Implement scalable guardrails with rollout controls, monitoring, and rollback triggers

  • Produce reusable operating system artifacts during the workshop

Learn directly from Elena & Polly

Elena Luneva


Fractional CPO. Product executive (Braintrust, Nextdoor, OpenTable, LiquidSpace)

Braintrust · Nextdoor · LiquidSpace · OpenTable · BlackRock
Polly Allen


Ex-Alexa AI PM turned founder at AI Career Boost

Amazon · Elsevier · Schneider Electric · MIT

Who this course is for

  • Director/Head of Product: Leads AI initiatives; turns prototypes into production with clear quality, gates, and exec updates.

  • Eng leader (Sr Mgr/Dir/VP): Ships AI fast while managing reliability, security, and cost with consistent eval + risk reviews.

  • Data/ML leader (Dir/Head): Owns evals and performance; aligns product and execs with shared language and decision-ready tradeoffs.

What's included

Live sessions

Learn directly from Elena Luneva & Polly Allen in a real-time, interactive format.

Templates and prompt playbooks

Playbooks, prompts, system design docs, and templates that you can access anytime.

Video Modules

Watch at your own pace. Modules include live scenarios and practice, and come with tools and templates. Lifetime access to rewatch anytime.

Alumni Advantage

Alumni enjoy exclusive rates for future programs and executive labs.

Maven Guarantee

This course is backed by the Maven Guarantee. Students are eligible for a full refund up until the halfway point of the course.

Course syllabus

4 live sessions • 5 lessons

Week 1

Mar 21—Mar 22

    • The AI-First Operating System

      Sat 3/21, 4:00 PM—5:30 PM (UTC)

    • Building Your Discovery & Experimentation Engine

      Sat 3/21, 5:30 PM—7:00 PM (UTC)

    • 360° Risk Framework

      Sun 3/22, 4:00 PM—5:30 PM (UTC)

    • Evaluation for Launch Readiness & Beyond

      Sun 3/22, 5:30 PM—7:00 PM (UTC)

Schedule

Live sessions

6 hrs / week

    • Sat, Mar 21, 4:00 PM—5:30 PM (UTC)

    • Sat, Mar 21, 5:30 PM—7:00 PM (UTC)

    • Sun, Mar 22, 4:00 PM—5:30 PM (UTC)

    • Sun, Mar 22, 5:30 PM—7:00 PM (UTC)

Projects

4 hrs / week

Async content

6 hrs / week

$997 USD

Mar 21—Mar 23

Enroll