CPO, GM (ex-Braintrust, Nextdoor, OpenTable)
AI Career Boost Founder, Coach, Advisor


AI increases IC leverage, but it also breaks the management systems most teams rely on. Prototypes multiply. “Quality” becomes subjective. Risk can ship quietly. Leadership gets pulled into every review because no one is sure what good looks like, or who is accountable for evaluation, failure modes, and rollout decisions.
If you’re a Director, Head of Product/Eng/Data, or VP with multiple AI initiatives in flight, you’re likely seeing the same pattern: lots of demos, uneven production outcomes, rising cost and reliability questions, and execs asking, “How do we know it’s good?” “What can go wrong?” “What’s the ROI?”
This workshop gives you a practical operating system for leading AI-native work without slowing teams down. You’ll leave with a repeatable review process to assess evaluation and risk, a portfolio and cadence that prevents orphan experiments, and executive-ready updates that translate AI progress into cost, ROI, risk posture, and clear next decisions. The result: faster alignment, fewer surprises, and AI work that ships to production with confidence.
Lead AI initiatives with clear goals, defensible evaluation, explicit risk decisions, and exec updates that drive alignment and investment.
Apply the 5-question leader review (data, evals, failures, impact, rollback)
Use a go/no-go rubric and failure mode checklist to make decisions faster
Define and review scenarios you can reuse across teams
Set a weekly learning review agenda and decision log to drive clarity
Build a 90-day AI portfolio (Horizon 1/2/3 bets, owners, decision gates)
Deploy rituals and ownership patterns that prevent orphan work
Map initiatives with the AI Project Map to surface owners, gates, decisions
Spot prototype loops, missing evals, and unclear quality bars early
Audit 3–5 active projects to identify involvement gaps and next steps
Draft exec updates using audience-targeted agentic flows
Frame ROI, cost, and risk posture so stakeholders can make decisions
Run a structured peer review to improve clarity and decision focus
Identify cost drivers (inference, tooling, data, vendors, headcount)
Model value metrics and ROI tied to business outcomes
Run an “AI finance reviewer” to flag weak assumptions and missing metrics
Define quality bars with examples, acceptance criteria, and review gates
Implement scalable guardrails with rollout controls, monitoring, rollback triggers
Produce reusable operating system artifacts during the workshop
Director/Head of Product: Leads AI initiatives; turns prototypes into production with clear quality, gates, and exec updates.
Eng leader (Sr Mgr/Dir/VP): Ships AI fast while managing reliability, security, and cost with consistent eval + risk reviews.
Data/ML leader (Dir/Head): Owns evals and performance; aligns product and execs with shared language and decision-ready tradeoffs.
Live sessions
Learn directly from Elena Luneva & Polly Allen in a real-time, interactive format.
Templates and prompt playbooks
Playbooks, prompts, system design docs, and templates that you can always access.
Video Modules
Watch at your own pace. Modules include live scenarios and practice, and come with tools and templates. Lifetime access to rewatch anytime.
Alumni Advantage
Alumni enjoy exclusive rates for future programs and executive labs.
Maven Guarantee
This course is backed by the Maven Guarantee. Students are eligible for a full refund up until the halfway point of the course.
4 live sessions • 22 lessons
Feb 28 • AI Work Patterns: From Experiments to Production Outcomes
Feb 28 • Evaluation and Risk Reviews: Make Quality and Risk Decisions Explicit
Mar 1 • The AI Team Operating System: Cadence + Portfolio Discipline
Mar 1 • Exec and GTM Alignment: The Update That Drives Decisions
Live sessions
6 hrs / week
Sat, Feb 28 • 5:00 PM–6:30 PM (UTC)
Sat, Feb 28 • 6:30 PM–8:00 PM (UTC)
Sun, Mar 1 • 5:00 PM–6:30 PM (UTC)
Sun, Mar 1 • 6:30 PM–8:00 PM (UTC)
Projects
4 hrs / week
Async content
6 hrs / week
$850 USD