Agentic AI PM Sprint

Aki Wijesundara, PhD

AI Founder | Google AI Accelerator Alum

Manu Jayawardana

AI Advisor | Co-Founder & CEO at Krybe

Stop Building Agent Demos — Ship Agents You Can Trust in Production

Turn agentic AI from experimental prototypes into reliable, production-ready systems. Most agents fail silently in production—broken tool calls, infinite loops, partial task completion, and unpredictable results. The problem isn’t just prompts; it’s the lack of a repeatable system PMs can own.

Agentic AI PM Sprint teaches PMs to design, control, and measure agents that actually complete tasks. You’ll:
✅ Decide when an agent is the right solution versus workflows or simple automation
✅ Map the full agent loop: triggers, planning, tool calls, memory, stopping rules
✅ Define tool contracts (inputs, outputs, retries, permissions) to prevent failures
✅ Build fallbacks, safe mode, and human-in-the-loop escalation
✅ Set metrics that predict success and detect drift early
✅ Ship with staged rollout and monitoring plans PMs can defend
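To make the "agent loop" concrete: the pieces above (trigger, planning, tool calls, memory, stopping rules) fit together in a handful of lines. This is an illustrative sketch only; `plan` and `run_tool` are hypothetical placeholders for an LLM planner and a tool runtime, not part of any specific framework taught in the course.

```python
# Minimal agent-loop sketch: trigger -> plan -> tool call -> memory -> stop check.
# `plan` and `run_tool` are toy placeholders; the stopping rules (a hard step
# cap plus an explicit "done" signal) are the point.

MAX_STEPS = 5  # stopping rule: a hard cap prevents infinite loops


def plan(goal, memory):
    """Toy planner: finish once we have any observation."""
    if memory:
        return {"action": "done", "input": None}
    return {"action": "search", "input": goal}


def run_tool(step):
    """Toy tool runtime: echoes the requested action."""
    return f"result of {step['action']}({step['input']})"


def agent_loop(goal):
    memory = []  # short-term memory: observations from prior steps
    for _ in range(MAX_STEPS):
        step = plan(goal, memory)
        if step["action"] == "done":  # stopping rule: planner signals completion
            return memory
        memory.append(run_tool(step))
    return memory  # stopped by the step cap, not by success: flag for escalation
```

Note that exiting via the step cap rather than the "done" branch is itself a signal worth logging; silent partial completion is exactly the failure mode described above.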

Week by week, move from uncertain prototypes to structured agent specs, measurable KPIs, and robust rollout plans. Teams using this approach cut failed launches 40–60%, reduce post-launch firefighting, and scale faster—replacing guesswork with control, reliability, and stakeholder trust.

What you’ll learn

You’ll create an Agent Spec Pack for a real agent you’re building.

  • Map the agent loop: triggers, planning, tool calls, memory, stopping rules

  • Decide when to use an agent, a workflow, or simpler automation

  • Translate desired outcomes into measurable success criteria

  • Identify why agents break in production and turn vague concerns into actionable fixes

  • Create a failure taxonomy for agentic tasks and tool interactions

  • Separate leading indicators (tool success, retries, time-to-complete) from lagging indicators (task success, escalation rate)

  • Define inputs, outputs, error states, retries, and permissions for all tools

  • Build fallbacks, safe mode, and human-in-the-loop escalation paths

  • Run lightweight monitoring and weekly agent ops cadences
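As one concrete illustration of the tool-contract and escalation ideas above, here is a hedged sketch: a contract declaring inputs, outputs, error states, a retry budget, and permissions, plus a wrapper that hands off to a human when retries run out. The field names and the `Escalate` mechanism are our assumptions for illustration, not a schema prescribed by the course.

```python
# Sketch of a tool contract: declared inputs/outputs, handled error states,
# a retry budget, and a permission scope, with a wrapper that escalates to a
# human once the budget is exhausted. All names are illustrative assumptions.
from dataclasses import dataclass
from typing import Callable


@dataclass
class ToolContract:
    name: str
    inputs: dict          # e.g. {"query": "str"}
    outputs: dict         # e.g. {"results": "list[str]"}
    error_states: tuple   # exception types the agent is allowed to retry
    max_retries: int      # retry budget before falling back
    permissions: tuple    # scopes the tool may exercise, e.g. ("read:crm",)


class Escalate(Exception):
    """Raised when retries are exhausted: hand off to a human."""


def call_with_contract(contract: ToolContract, fn: Callable, **kwargs):
    # Reject inputs the contract doesn't declare (fail fast, not silently).
    unknown = set(kwargs) - set(contract.inputs)
    if unknown:
        raise ValueError(f"{contract.name}: undeclared inputs {unknown}")
    for _attempt in range(contract.max_retries + 1):
        try:
            return fn(**kwargs)
        except contract.error_states:
            continue  # declared, retryable error: spend one retry
    raise Escalate(f"{contract.name}: retries exhausted, escalate to human")
```

In practice the same contract object can drive both runtime enforcement and monitoring: retry counts feed the leading indicators above, while escalation rate is a lagging one.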

Learn directly from Aki & Manu

Aki Wijesundara, PhD

AI Founder | Educator | Google AI Accelerator Alum

Previous Students from
Google
OpenAI
Meta
Amazon Web Services
NVIDIA
Manu Jayawardana

AI Advisor | Co-Founder & CEO at Krybe | Co-Founder of Snapdrum

Who this course is for

  • Product managers building agentic AI who want reliable, production-ready agents, not experimental prototypes.

  • PMs with LLM and automation basics who want a practical, data-driven way to define behavior and measure success.

  • PMs shipping AI in regulated industries who want a repeatable, approval-ready process instead of slow cycles.

What's included

Live sessions

Learn directly from Aki Wijesundara, PhD & Manu Jayawardana in a real-time, interactive format.

Hands-On Customized Resources

Get access to a set of resources customized to the agent you're building.

Lifetime Discord Community

Private Discord for peer reviews, job leads, and ongoing support.

Guest Sessions

Webinar sessions hosted with guests from the instructors' industry network.

Certificate of completion

Showcase your skills to clients, employers, and your LinkedIn network.

Maven Guarantee

This course is backed by the Maven Guarantee. Students are eligible for a full refund up until the halfway point of the course.

Course syllabus

Week 1

Jan 29—Feb 1

    Lecture 1: Foundations of Agentic AI

    6 items

Week 2

Feb 2—Feb 8

    Lecture 2: Agent Loops & Tool Contracts

    5 items

Schedule

Live sessions (6 hrs, 1 hr / week)

Learn directly from Aki Wijesundara, PhD & Manu Jayawardana in a real-time, interactive format.

Prerecorded lectures (6 lectures, 6 hrs, 2 hrs / week)

Short, focused videos that break down the complete agentic AI framework, designed for quick learning and easy rewatching as you apply it in production.

Office hour Q&As (6+ sessions, 6 hrs, 2 hrs / week)

Open office hours for deep dives, debugging help, and personalized feedback.