Your AI feature needs evals, not just analytics

Hosted by Dr. Marily Nika

Mon, Jun 8, 2026

8:00 PM UTC (30 minutes)

Virtual (Zoom)

Free to join


Go deeper with a course

Featured in Lenny’s List
AI Product Management Bootcamp & Certification by AI Product Academy
Dr. Marily Nika, AI @ Google, Constantinos Neophytou, and Deb Liu, Former CEO @ Ancestry

What you'll learn

Define what “good” means for an AI feature

Turn vague quality goals like “helpful,” “accurate,” or “personalized” into concrete evaluation criteria your team can act on.

Build a simple eval set from real user workflows

Create test cases from user journeys, edge cases, common failures, and expected outputs before you ship.
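A minimal sketch of what such an eval set might look like in code. Everything here (case names, the `must_include` check, the stub model) is illustrative, not material from the session:

```python
# Hypothetical eval set: a few hand-written test cases drawn from real user
# workflows. Each case pairs an input with simple checks on the output.
eval_set = [
    {
        "case": "common workflow",
        "input": "Summarize my meeting notes",
        "must_include": ["action items"],
    },
    {
        "case": "edge case: empty input",
        "input": "",
        "must_include": ["please provide"],
    },
]

def run_evals(model_fn, cases):
    """Run every case through the model and report pass/fail per case."""
    results = []
    for case in cases:
        output = model_fn(case["input"]).lower()
        passed = all(term in output for term in case["must_include"])
        results.append((case["case"], passed))
    return results

# Stub model for demonstration; replace with a real model call.
def fake_model(prompt):
    return "Action items: follow up. Please provide notes if empty."

for name, passed in run_evals(fake_model, eval_set):
    print(f"{name}: {'PASS' if passed else 'FAIL'}")
```

Even a dozen cases like these, written before launch, catch regressions that a usage dashboard never surfaces.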

Connect evals to product analytics

Assess not only whether users clicked, but whether the AI output was useful, trusted, and safe.
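One way to make that connection concrete is to attach an eval-style quality score to each analytics event, so dashboards can segment user behavior by output quality. A hedged sketch, with a stand-in scoring function and event schema that are assumptions, not a prescribed design:

```python
# Hypothetical example: log a quality signal alongside the behavioral signal.
def score_output(output: str) -> float:
    """Stand-in quality check; a real eval might use rubrics or a judge model."""
    checks = [len(output) > 0, "sorry" not in output.lower()]
    return sum(checks) / len(checks)

def log_event(user_clicked: bool, output: str) -> dict:
    """Build an analytics event that carries both click and quality data."""
    return {
        "event": "ai_response_shown",
        "clicked": user_clicked,
        "quality_score": score_output(output),
    }

event = log_event(True, "Here are your action items.")
print(event)
```

With both fields in the same event, you can answer questions like "do users click more when quality is high?" rather than tracking clicks alone.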

Why this topic matters

AI teams are shipping features where the output is probabilistic. A dashboard can show adoption, retention, and drop-off, but it cannot tell you whether the model gave a good answer, hallucinated, or created user risk. Evals help you test quality before launch. Analytics helps you understand behavior after launch. Together, they create the feedback loop that tells you what to improve next.

You'll learn from

Dr. Marily Nika

Gen AI PM Lead @ Google | ex-Meta, Fellow @ Harvard | TED AI Speaker | 40 Under 40



Sign up to join this lesson
