AI Feature Validation for Product Managers

Madalina Turlea

Founder @Lovelaice | 10 years in Product

Catalina Turlea

Founder @Lovelaice | 2x founder & CTO


Build your first AI eval in 3 hours — on your own use case, with your own data.

Whether you're building your first AI feature or exploring what AI could do in your product, there's a moment every PM hits: you look at the AI output and think "this seems okay?" but you don't have a structured way to know for sure.

You've heard about evals. You know vibe checking isn't enough. But where do you actually start?

This workshop takes you from that starting point to a working evaluation framework, built on your use case, with your data, in 3 hours.

You'll write a prompt, run it across multiple LLMs, read and annotate real AI responses to understand where they break, find failure patterns through error analysis, define automatic metrics based on what you found, and measure whether your improvements actually worked.

You'll leave with your own accuracy metric, a failure taxonomy for your use case, and a repeatable process to evaluate and improve any AI feature from here.

Every participant works hands-on with their own use case. We've run 1,000+ AI experiments with product teams, and this workshop is the compressed version of what we've seen work.

Workshop agenda

  • 8:00AM EDT

    From idea to first prompt

    Map your AI feature idea to a testable use case. Write the first version of your system prompt, structured for real product scale, not just for a demo.


  • 8:30AM EDT

    Run your first experiment across multiple models

    Create your first real test cases and run your prompt across multiple LLMs. See how different models handle the same inputs with your actual data.


  • 9:00AM EDT

    Annotate AI responses and spot failures

    Read real AI outputs. Annotate what specifically failed: hallucination, missing info, wrong format. This is the skill that changes how you evaluate AI.


  • 9:30AM EDT

    Write your first evals

    Based on the failures you found, define automatic checks and build an accuracy metric specific to your use case. Not a generic benchmark.


  • 10:00AM EDT

    Improve your prompt and measure the impact

    Improve your prompt based on what you learned. Run it again. Watch your metrics move. This is the loop you'll repeat for every AI feature you ship.


  • 10:30AM EDT

    Final Q&A

    Get answers tailored to your own AI eval use case.
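The loop these sessions build (test cases, deterministic checks, an accuracy number) can be sketched in a few lines of Python. Everything here, including `run_prompt` and its canned outputs, is a hypothetical stand-in for your own prompt, model, and data:

```python
import json

# Hypothetical stand-in for a real model call; in the workshop you'd
# swap this for your LLM provider's API.
def run_prompt(system_prompt: str, user_input: str) -> str:
    canned = {
        "Summarize: the meeting moved to Friday.": '{"summary": "Meeting moved to Friday."}',
        "Summarize: budget approved.": "Budget approved.",  # failure: not JSON
    }
    return canned[user_input]

# Deterministic checks derived from failures you observed while annotating:
def is_valid_json(output: str) -> bool:
    try:
        json.loads(output)
        return True
    except json.JSONDecodeError:
        return False

def has_summary_field(output: str) -> bool:
    return is_valid_json(output) and "summary" in json.loads(output)

test_cases = [
    "Summarize: the meeting moved to Friday.",
    "Summarize: budget approved.",
]

results = [run_prompt("You are a summarizer. Reply in JSON.", c) for c in test_cases]
checks = [is_valid_json(out) and has_summary_field(out) for out in results]
accuracy = sum(checks) / len(checks)
print(f"accuracy: {accuracy:.0%}")  # one of the two cases fails the format check
```

After you improve the prompt, you rerun the same test cases and watch the accuracy number move; that is the repeatable loop the agenda ends on.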

What you’ll learn

Design, build, validate and iterate on an AI feature, hands-on

  • How to design a high-performing prompt for your own use case

  • How to design input and output formats for your AI feature

  • How to define in detail what good results look like for your AI feature

  • How to define deterministic evals for your own use case

  • How different LLMs perform the same task, and which one is optimal for your use case

  • How cost, accuracy, and latency behave at scale, so you can build a business case for your AI feature

  • How your annotations and feedback compound into distinct error categories

  • How to extract key prompt updates that improve performance

  • How to extract meaningful metrics that check for those failure patterns

Learn directly from Madalina & Catalina

Madalina Turlea

Product leader with 10+ years in platform and SaaS products | Founder Lovelaice

Finastra
Trading Central
Printify
Catalina Turlea

2x founder & CTO | 14+ years in tech | Founder @Lovelaice

Teleclinic
Nilo
SIXT Group
Freeletics

Who this workshop is for

  • The PM who owns an AI feature but not the process around it

  • The PM about to pitch an AI feature internally

  • The product lead responsible for a whole team's AI quality

What's included

Live sessions

Learn directly from Madalina Turlea & Catalina Turlea in a real-time, interactive format.

Lifetime access

Go back to course content and recordings whenever you need to.

Community of peers

Stay accountable and share insights with like-minded professionals.

Certificate of completion

Share your new skills with your employer or on LinkedIn.

Maven Guarantee

Your purchase is backed by the Maven Guarantee.

Free resource

Setting up your first AI eval with an LLM-as-judge

Most common mistakes to avoid when building an LLM-as-judge

Understand why most teams' LLM judges don't work and the specific mistakes that make them unreliable.

How to write your judge instructions

Learn how to identify what to check through an LLM-as-judge, define specific rules, and build the judge prompt.

How to evaluate your LLM-as-judge

Know how to evaluate the evaluator by comparing judge scores to human labels and decide if you can trust the results.
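The "evaluate the evaluator" step can be as small as an agreement check between judge labels and human labels. A minimal sketch (the labels below are made up for illustration, not real data):

```python
# Hypothetical labels on the same 10 annotated examples: 1 = pass, 0 = fail.
human_labels = [1, 1, 0, 1, 0, 0, 1, 1, 0, 1]
judge_labels = [1, 1, 0, 0, 0, 1, 1, 1, 0, 1]

# Raw agreement rate: how often the LLM judge matches the human annotator.
agreement = sum(h == j for h, j in zip(human_labels, judge_labels)) / len(human_labels)
print(f"judge/human agreement: {agreement:.0%}")

# Agreement alone can mislead when one label dominates, so also check
# the direction of disagreement: a judge that passes bad outputs is
# usually a worse problem than one that fails good ones.
false_passes = sum(1 for h, j in zip(human_labels, judge_labels) if h == 0 and j == 1)
false_fails = sum(1 for h, j in zip(human_labels, judge_labels) if h == 1 and j == 0)
print(f"false passes: {false_passes}, false fails: {false_fails}")
```

If agreement with human labels is high enough for your use case, you can start trusting the judge's scores at scale; if not, the judge prompt itself needs another iteration.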

Testimonials

  • Thank you so much for this wonderful AI-course. I enjoyed it so much! The material was very understandable with a hands-on-approach. I've learnt a lot!

    Reka

    Start-up Founder at Fluxion
  • Each session takes us deeper into the advantages of AI, from experiment evaluation to exploring how different models behave. I enjoy how your passion and clear explanations make prompt engineering, practical tips, and even industry insights so easy to understand. Learning from you both is a pleasure.

    Maria

    Senior Engineer
  • Even if you’re not sure what kind of AI feature you want to build yet, this course will help you gain the clarity you need. Once you go through the material and understand the concepts, it truly feels like a curtain has been lifted. Catalina and Madalina are amazing teachers. They explain everything in a clear and approachable way, making complex topics easy to understand, even for people without a technical background. The live sessions are truly priceless. Being able to ask questions, see real examples, and learn directly from their experience adds incredible value to the course and makes the learning experience much more impactful. Overall, it’s an excellent starting point for anyone interested in building AI features or simply understanding how modern AI systems work.

    Barbara

    Senior Developer
  • This course was eye opening and Catalina and Madalina were wonderful teachers, supporting us with personal examples and taking the time to answer all of our questions. Some of my highlights on the course were understanding LLMs behind the scenes, learning how to better structure and optimize our prompts (and how important experimentation is for that - also thanks to Lovelaice), and of course creating custom metrics and running evaluations, especially with LLM-as-a-Judge. Thank you for creating this course!

    Pavlina

    Frontend Developer

$200 USD · May 28