From AI user to AI feature designer: a live exercise

Hosted by Madalina Turlea and Catalina Turlea

Thu, Apr 9, 2026

12:00 PM UTC (45 minutes)

Virtual (Zoom)

Free to join


Go deeper with a course

Build and evaluate your first AI feature
Madalina Turlea and Catalina Turlea

What you'll learn

Get your personalized AI product evaluation score

Use a live diagnostic to see how your AI evals approach compares to 100+ teams and where your biggest gap is.

Shift from AI user to AI feature designer

See the exact same AI output through two lenses and learn why the designer's lens is what ships better products.

Build an evaluation framework from your own reactions

Turn your gut feeling about AI quality into structured criteria you can apply to any AI feature tomorrow.

Why this topic matters

You use AI products daily, but rarely get to design evals for one. When it's time to evaluate your own AI features, you stop at "vibe" checks because you don't know how to scale them. This session makes you both the user and the designer of the same AI feature in real time, so you leave with the muscle memory to evaluate any AI output systematically.

You'll learn from

Madalina Turlea

Co-founder @Lovelaice, 10+ years in Product

I'm co-founder of Lovelaice and a product leader with 10+ years building products across fintech, payments, and compliance. I hold a CFA charter and have led AI product development in highly regulated environments — where AI failures aren't just embarrassing, they're liabilities.

I've watched smart teams make the same mistakes: choosing models based on benchmarks that don't reflect their use case, writing prompts that work in testing but fail in production, and leaving domain experts out of the loop. These aren't edge cases — they're why 80% of AI projects underperform.

Through these failures (my own included), I developed a systematic approach to AI experimentation that puts domain expertise at the center. I teach what I've learned building Lovelaice: how to test, evaluate, and iterate on AI — before it reaches your users.

Catalina Turlea

Founder @Lovelaice

I bring over 14 years of software development expertise and a decade of startup experience to help teams build AI products that actually work. Since founding my first company six years ago, I have run a consultancy specializing in helping startups build MVPs, solve complex technical challenges, and integrate AI effectively.

I've seen firsthand how AI projects fail for lack of systematic experimentation: teams treat AI like traditional software and struggle with inconsistent results. That's why I co-created Lovelaice, a platform designed for non-technical professionals to experiment with AI agents systematically.
