Why Your AI Analysis Keeps Failing: A Live Diagnosis

Hosted by Caitlin Sullivan

Thu, Jan 15, 2026

1:30 PM UTC (45 minutes)

Virtual (Zoom)

Free to join


Go deeper with a course

AI Analysis for PMs: From Feedback Chaos to Decisions in Hours
Caitlin Sullivan
View syllabus

What you'll learn

3 failure modes behind "AI gave me garbage"

I'll demo real examples of broken outputs, from quote issues to inconsistent findings, and diagnose exactly why they happen.

The prompt mistakes that look fine but ruin everything

Small wording choices cause wildly different results. I'll break down what's happening so you can spot the problems fast.

The 30-second fix most people skip entirely

Most analysis fails before prompting even starts. I'll show you what to do with your data first, and why skipping that step guarantees bad output.

Why this topic matters

It's been *multiple years* now since LLMs went mainstream. You've tried AI for analysis, but you keep getting output that looks usable and true until it falls apart when you dig in. So now you don't trust it. Or you use it but hedge everything. The problem usually isn't the AI. I'll diagnose common issues live, show you exactly why they break analysis, and give you the fixes fast.

You'll learn from

Caitlin Sullivan

Ex-Head of User Research at Spotify Business/Soundtrack. AI Advisor.

Why trust me?


  • Ex-Head of User Research at Spotify Business/Soundtrack, embedded in Product and sitting alongside PMs shipping B2B products. I know how fast feedback piles up.


  • Product teams like Maze hire me to fix the prompts behind their AI tools. I help them build the functionality that gets customers better AI results.


  • Trained 450+ people at Canva, YouTube, Figma, Meta, and more to use AI for analysis they can actually defend in high-stakes decisions. (My previous course is rated 4.8/5)


  • 1000+ hours testing AI for customer insights.


Trusted by teams and individuals at

Canva
YouTube
Ramp
Figma

Sign up to join this lesson

By continuing, you agree to Maven's Terms and Privacy Policy.