Guardrails for Coding Agents: Pointing Them the Right Way

Hosted by Feifan Zhou

Tue, Dec 16, 2025

7:00 PM UTC (30 minutes)

Virtual (Zoom)

Free to join


What you'll learn

Ways to keep coding agents on track

Learn about the different tools for constraining agent output, ranging from deterministic checks to natural-language instructions.

When and where to use each tool

Each tool has different trade-offs and performance implications; many of them can be used together.

Keeping guardrails up to date

Treat guardrails as a first-class part of your codebase: share ownership across teams and update them as the code evolves.

From guardrails to patterns

Code is structured text with repeating patterns. Use tooling to enforce the patterns you want and migrate away from the ones you don't.
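To make that range concrete, here is a minimal sketch in Python (illustrative only, not material from the session; the utcnow() rule and the instruction text are assumptions): a deterministic guardrail written as a small AST check that a CI job could run, next to the same intent expressed as a natural-language instruction an agent would read from a file checked into the repo.

```python
# Illustrative sketch: one rule, expressed two ways.
# The specific rule (migrating away from datetime.utcnow()) is an assumption,
# chosen only to show the deterministic end vs. the natural-language end.
import ast
import sys


def find_utcnow_calls(path: str) -> list[str]:
    """Deterministic guardrail: flag calls to *.utcnow(), which we want to
    migrate to timezone-aware datetime.now(timezone.utc)."""
    with open(path, encoding="utf-8") as f:
        tree = ast.parse(f.read(), filename=path)
    violations = []
    for node in ast.walk(tree):
        if (isinstance(node, ast.Call)
                and isinstance(node.func, ast.Attribute)
                and node.func.attr == "utcnow"):
            violations.append(
                f"{path}:{node.lineno}: utcnow() call found; "
                "use datetime.now(timezone.utc) instead"
            )
    return violations


# Natural-language guardrail: the same intent, written for an agent to read
# from an instructions file checked into the repo alongside the code.
AGENT_INSTRUCTIONS = """\
All timestamps in this repo must be timezone-aware.
Never call datetime.utcnow(); use datetime.now(timezone.utc) instead.
"""

if __name__ == "__main__":
    problems = [v for arg in sys.argv[1:] for v in find_utcnow_calls(arg)]
    print("\n".join(problems) if problems else "no violations found")
    sys.exit(1 if problems else 0)
```

Both forms encode the same pattern; the AST check fails a build deterministically, while the written instruction steers the agent before the code is ever produced.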

Why this topic matters

AI coding agents are reshaping how developers work. Learn how to set effective guardrails that keep agents aligned and reliable, and that evolve with your codebase, so your tools stay helpful, not harmful, as your projects scale.

You'll learn from

Feifan Zhou

Tanagram CEO & Cofounder

As CEO and cofounder of Tanagram, Feifan is analyzing codebases using knowledge graphs and anomaly detection to enable agents and engineering teams to ship faster and avoid bugs. Before Tanagram, Feifan was a senior engineer at Stripe, where he helped launch the Issuing product; conducted independent research into developer tooling and software development; and started the AppDev project team at Cornell University.

Previously at

Stripe
Cornell University

Sign up to join this lesson
