Demystify popular AI features with us - Expense Policy Agent

Hosted by Catalina Turlea and Madalina Turlea

What you'll learn

Build an AI Expense Policy Agent

How to test an AI Expense Policy Agent (like RAMP) before building it

Live demo: implementing this smart agent together

Design an experiment to test AI policy interpretation
Create realistic test cases (including tricky edge cases)

Evaluate whether AI is production-ready for your use case

Estimate costs at scale
Which models get it right
Where AI struggles (and why)
When you need human review
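To make the experiment-design and test-case steps concrete, here is a minimal sketch of an evaluation harness. All policy categories, limits, and test cases below are hypothetical placeholders, and the rule-based `baseline_agent` stands in for a real AI model:

```python
# Minimal sketch of an expense-policy evaluation harness.
# All policy rules, limits, and test cases are hypothetical examples.

POLICY_LIMITS = {  # category -> per-item limit in USD
    "meals": 50,
    "hotel": 200,
    "software": 100,
}

def baseline_agent(category: str, amount: float) -> str:
    """Stand-in for the AI agent: approve if within the category limit."""
    limit = POLICY_LIMITS.get(category)
    if limit is None:
        return "needs_review"  # unknown category -> escalate to a human
    return "approved" if amount <= limit else "rejected"

# Each test case pairs an input with the decision the policy requires.
TEST_CASES = [
    ({"category": "meals", "amount": 35.0}, "approved"),
    ({"category": "hotel", "amount": 450.0}, "rejected"),
    # Edge case: category not covered by the written policy.
    ({"category": "gym", "amount": 20.0}, "needs_review"),
]

def evaluate(agent) -> float:
    """Return the fraction of test cases the agent decides correctly."""
    correct = sum(
        agent(case["category"], case["amount"]) == expected
        for case, expected in TEST_CASES
    )
    return correct / len(TEST_CASES)

print(evaluate(baseline_agent))  # prints 1.0 for this rule-based baseline
```

Swapping `baseline_agent` for a call to an actual model, while keeping the same test cases and scoring loop, is the essence of the experiment: the cases (especially the edge cases) stay fixed while you compare models against them.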

Why this topic matters

Let's build another AI feature, this time for an expense policy. Most companies waste hours answering the same policy questions:
"Can I expense this?"
"Is this covered?"
"What's the limit for...?"

RAMP built an AI agent that employees can text directly.
By the end, you'll have a complete experiment template you can use to start testing and evaluating AI yourself.
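To give the "estimate costs at scale" step some shape, here is a back-of-the-envelope sketch. The token counts and per-token prices are illustrative assumptions, not real vendor pricing:

```python
# Back-of-the-envelope cost estimate for a policy Q&A agent.
# Token counts and prices below are assumed placeholders; substitute
# current numbers for whichever model you are evaluating.

PROMPT_TOKENS = 1_200        # policy text + employee question (assumed)
RESPONSE_TOKENS = 150        # short answer (assumed)
PRICE_PER_1M_INPUT = 3.00    # USD per million input tokens (assumed)
PRICE_PER_1M_OUTPUT = 15.00  # USD per million output tokens (assumed)

def monthly_cost(questions_per_month: int) -> float:
    """Estimated USD cost of answering that many questions per month."""
    per_question = (
        PROMPT_TOKENS / 1e6 * PRICE_PER_1M_INPUT
        + RESPONSE_TOKENS / 1e6 * PRICE_PER_1M_OUTPUT
    )
    return questions_per_month * per_question

print(f"${monthly_cost(10_000):.2f}/month")  # prints $58.50/month
```

Even rough numbers like these let you compare models on cost before committing to one.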

You'll learn from

Catalina Turlea

Founder @Lovelaice

I bring over 14 years of software development expertise and a decade of startup experience to help teams build AI products that actually work. Since founding my first company six years ago, I have run a consultancy specializing in helping startups build MVPs, solve complex technical challenges, and integrate AI effectively.

I've seen firsthand how AI projects fail for lack of systematic experimentation: teams treat AI like traditional software and struggle with inconsistent results. That's why I co-created Lovelaice, a platform designed for non-technical professionals to experiment with AI agents systematically.

Madalina Turlea

Co-founder @Lovelaice, 10+ years in Product

I'm a product manager with over 10 years of experience building and leading products across diverse industries. Most recently, I've been leading product development for an AI-backed FinTech, navigating the unique challenge of bringing AI innovation to one of the most regulated environments.

I'm not here as someone who figured out AI from day one; I'm here as a PM who learned the hard way that building AI products is fundamentally different from traditional software development. I've watched my own teams make the same critical mistakes that plague 80% of AI projects: picking one model, writing simple prompts, getting promising early results, then struggling with inconsistency in production.

Through these challenges, I discovered that successful AI product development requires systematic experimentation, explicit domain knowledge integration, and continuous evaluation, not just once but as an ongoing practice. The goal isn't to check the "AI box" but to deliver AI that genuinely improves users' lives.