Run Eval Loops and Guardrails for Cursor Agents

Hosted by Carmelo Iaria

Wed, May 27, 2026

4:00 PM UTC (30 minutes)

Virtual (Zoom)

Free to join

Go deeper with a course

Become an Agentic Architect
Carmelo Iaria
View syllabus

What you'll learn

Eval Criteria Design

How to define pass/fail checks that reflect real production quality.

Lightweight Eval Loop

A repeatable loop to catch regressions in Cursor workflows.

Guardrail Integration

How to connect eval results to runtime safeguards and release decisions.
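The three pieces above fit together in a few lines of code. Below is a minimal sketch of that shape: simple pass/fail checks, a loop that scores a batch of agent outputs, and a pass-rate threshold acting as a release guardrail. All names and checks here (`eval_no_todo_markers`, `run_eval_loop`, the 0.9 threshold) are hypothetical illustrations, not the session's actual materials.

```python
# Hypothetical sketch: pass/fail eval checks feeding a release guardrail.

def eval_no_todo_markers(output: str) -> bool:
    """Fail if the agent's output still contains unfinished-work markers."""
    return "TODO" not in output and "FIXME" not in output

def eval_mentions_tests(output: str) -> bool:
    """Pass only if the change description references tests."""
    return "test" in output.lower()

CHECKS = [eval_no_todo_markers, eval_mentions_tests]

def run_eval_loop(outputs: list[str], pass_threshold: float = 1.0) -> dict:
    """Run every check on every output and gate the release on the pass rate."""
    results = [[check(o) for check in CHECKS] for o in outputs]
    passed = sum(all(row) for row in results)
    pass_rate = passed / len(outputs) if outputs else 0.0
    return {
        "pass_rate": pass_rate,
        # Guardrail: block the release when the pass rate regresses
        # below the threshold.
        "release_ok": pass_rate >= pass_threshold,
    }

if __name__ == "__main__":
    samples = [
        "Refactored parser and added unit tests.",
        "TODO: wire up the retry logic; tests pending.",
    ]
    print(run_eval_loop(samples, pass_threshold=0.9))
```

Running this loop on every change, and treating `release_ok` as a hard gate, is the repeatable pattern: regressions surface as a dropped pass rate before they reach production.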

Why this topic matters

If quality is not measured, it cannot be trusted at scale. Teams that skip eval loops accumulate regressions and hidden risk as they move faster. This session shows how to combine evals and runtime guardrails in Cursor so each release improves reliability instead of introducing new uncertainty.

You'll learn from

Carmelo Iaria

Founder & CEO at Synaptic AI Consulting | Perplexity Business Fellow

AI strategy leader and technology innovator with 30+ years of global experience spanning Silicon Valley, Europe, and Brazil. Former Head of Innovation at Claro Brasil and Product Management leader at Cisco; founder of Synaptic and The AI Academy. Expert in building and deploying real-world AI solutions for Fortune 500s and high-growth startups, he bridges business and technology, empowering professionals to master the entire AI lifecycle and turn ideas into working products that deliver measurable impact.

Previously at

Cisco
Claro Brasil
Pearson

Sign up to join this lesson
