AI Feature Validation for Product Managers

Catalina Turlea

Founder @ Lovelaice | 2x founder & CTO

Madalina Turlea

Founder @ Lovelaice | 10 years in Product

Build your first AI eval in 3 hours — on your own use case, with your own data.

Whether you're building your first AI feature or exploring what AI could do in your product, there's a moment every PM hits: you look at the AI output and think "this seems okay?" but you don't have a structured way to know for sure.

You've heard about evals. You know vibe checking isn't enough. But where do you actually start?

This workshop takes you from that starting point to a working evaluation framework, built on your use case, with your data, in 3 hours.

You'll write a prompt, run it across multiple LLMs, read and annotate real AI responses to understand where they break, find failure patterns through error analysis, define automatic metrics based on what you found, and measure whether your improvements actually worked.

You'll leave with your own accuracy metric, a failure taxonomy for your use case, and a repeatable process to evaluate and improve any AI feature from here.

Every participant works hands-on on their own use case. We've run 1,000+ AI experiments with product teams, and this workshop is the compressed version of what we've seen work.

Workshop agenda

  • 8:00AM EDT

    From idea to first prompt

    Map your AI feature idea to a testable use case. Write the first version of your system prompt, structured for real product scale, not just for a demo.
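A system prompt "structured for real product scale" can be sketched like this; the product, rules, and template below are hypothetical examples, not material from the workshop. The idea is to keep role, rules, and output format in separate sections so each can be tested and revised independently once evals start surfacing failures:

```python
# Illustrative system-prompt template. All product details are made up;
# the point is the separation of role, rules, and output format.
SYSTEM_PROMPT = """\
You are a support assistant for an e-commerce product.

Rules:
- Answer only from the provided context.
- If the answer is not in the context, say "I don't know."

Output format:
Return JSON with keys "answer" (string) and "confidence" (0 to 1).
"""

def build_prompt(context: str, question: str) -> str:
    """Combine the fixed system prompt with per-request data."""
    return f"{SYSTEM_PROMPT}\nContext:\n{context}\n\nQuestion: {question}"
```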


  • 8:30AM EDT

    Run your first experiment across multiple models

    Create your first real test cases and run your prompt across multiple LLMs. See how different models handle the same inputs with your actual data.
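In code, that experiment is a small grid: same test cases, same prompt, every model. The sketch below is hypothetical; `call_model` is a stand-in for whatever LLM client you use, and the model names and test cases are made-up examples:

```python
# Hypothetical sketch of running one prompt across several models.
TEST_CASES = [
    {"input": "Where is my order #123?"},
    {"input": "Can I return a used item?"},
]
MODELS = ["model-a", "model-b"]  # placeholder model names

def call_model(model: str, prompt: str) -> str:
    # Replace with a real API call; this stub just echoes for the sketch.
    return f"[{model}] response to: {prompt}"

def run_experiment(prompt_template: str) -> list[dict]:
    """Same inputs through every model: the grid you'll annotate next."""
    results = []
    for model in MODELS:
        for case in TEST_CASES:
            prompt = prompt_template.format(input=case["input"])
            results.append({
                "model": model,
                "input": case["input"],
                "output": call_model(model, prompt),
            })
    return results
```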


  • 9:00AM EDT

    Annotate AI responses and spot failures

    Read real AI outputs. Annotate what specifically failed: hallucination, missing info, wrong format. This is the skill that changes how you evaluate AI.
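A failure taxonomy built from those three failure modes can be as small as an enum plus an annotation record; the example annotations below are illustrative, and a real taxonomy grows out of your own data:

```python
from dataclasses import dataclass, field
from enum import Enum

class Failure(Enum):
    HALLUCINATION = "hallucination"  # invented facts not in the source
    MISSING_INFO = "missing_info"    # correct but incomplete answer
    WRONG_FORMAT = "wrong_format"    # output violates the format spec

@dataclass
class Annotation:
    input: str
    output: str
    failures: list[Failure] = field(default_factory=list)  # empty == pass

    @property
    def passed(self) -> bool:
        return not self.failures

# Hypothetical annotated examples:
annotations = [
    Annotation("Where is order #123?", '{"answer": "Shipped Tuesday"}'),
    Annotation("Return policy?", "Sure, happy to help!",
               failures=[Failure.WRONG_FORMAT, Failure.MISSING_INFO]),
]
```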


  • 9:30AM EDT

    Write your first evals

    Based on the failures you found, define automatic checks and build an accuracy metric specific to your use case. Not a generic benchmark.
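As a sketch of what those automatic checks can look like: each check targets one failure mode from the error analysis, and accuracy is the share of outputs that pass every check. The specific checks below are illustrative assumptions (a JSON-format check and a blank-answer check), not the workshop's prescribed set:

```python
import json

def check_format(output: str) -> bool:
    """Catches the wrong-format failure: must be JSON with an 'answer' key."""
    try:
        data = json.loads(output)
    except json.JSONDecodeError:
        return False
    return isinstance(data, dict) and "answer" in data

def check_nonempty_answer(output: str) -> bool:
    """Catches missing info in its crudest form: a blank answer field."""
    try:
        data = json.loads(output)
    except json.JSONDecodeError:
        return False
    return isinstance(data, dict) and bool(str(data.get("answer", "")).strip())

CHECKS = [check_format, check_nonempty_answer]

def accuracy(outputs: list[str]) -> float:
    """Fraction of outputs that pass every automatic check."""
    passed = sum(all(check(o) for check in CHECKS) for o in outputs)
    return passed / len(outputs)
```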


  • 10:00AM EDT

    Improve your prompt and measure the impact

    Improve your prompt based on what you learned. Run it again. Watch your metrics move. This is the loop you'll repeat for every AI feature you ship.
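That loop reduces to one comparison: score both prompt versions on the same test set and look at the deltas. The numbers below are made-up placeholders, not results from the workshop:

```python
def compare(before: dict[str, float], after: dict[str, float]) -> dict[str, float]:
    """Per-metric change between two prompt versions on the same test set."""
    return {metric: round(after[metric] - before[metric], 3) for metric in before}

v1 = {"accuracy": 0.62, "format_pass_rate": 0.80}  # before the prompt change
v2 = {"accuracy": 0.78, "format_pass_rate": 0.95}  # after the prompt change
delta = compare(v1, v2)  # positive values mean the change helped
```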


  • 10:45AM EDT

    Final Q&A

Get answers to questions about your own AI eval use case.

Learn directly from Catalina & Madalina

Catalina Turlea

Co-Founder @ Lovelaice | 2x founder & CTO

Madalina Turlea

Co-founder & CPO @ Lovelaice | 10+ years in Product

Who this workshop is for

  • The PM who owns an AI feature but not the process around it

  • The PM about to pitch an AI feature internally

  • The product lead responsible for a whole team's AI quality

What's included

Live sessions

Learn directly from Catalina Turlea & Madalina Turlea in a real-time, interactive format.

Lifetime access

Go back to course content and recordings whenever you need to.

Community of peers

Stay accountable and share insights with like-minded professionals.

Certificate of completion

Share your new skills with your employer or on LinkedIn.

Maven Guarantee

Your purchase is backed by the Maven Guarantee.

$200 USD · Apr 28