How To Balance AI Governance With Operational Realities

Hosted by Stella Liu, Amy Chen, Niharika Srivastav, and Sanjay Saxena

Fri, Apr 3, 2026

7:00 PM UTC (45 minutes)

Virtual (Zoom)

Free to join


Go deeper with a course

Practical AI for Nonprofits
Niharika Srivastav and Sanjay Saxena
View syllabus

What you'll learn

Governance that fits real-world delivery

How to align governance with product timelines, team capacity, technical constraints, and operational pressure.

How to design scenario-based tests for AI systems

Move beyond generic benchmarks and test for real-world failure modes and edge cases in high-impact production systems.

Setting and calibrating evaluation thresholds for production

How to define minimum launch thresholds, operating thresholds, and escalation thresholds based on risk and business impact.

Monitoring and control after launch

How to think about post-deployment oversight, incident response, and continuous improvement.
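To give a flavor of the threshold-setting topic above, here is a minimal sketch of tiered evaluation thresholds. All names and numbers are illustrative assumptions for this page, not the instructors' framework:

```python
# Illustrative sketch: map an evaluation metric to an action tier.
# Threshold values and tier names are hypothetical examples only.

def triage(metric: float,
           launch_min: float = 0.90,
           operating_min: float = 0.85,
           escalation_min: float = 0.75) -> str:
    """Map an evaluation metric (e.g. task accuracy) to an action tier."""
    if metric >= launch_min:
        return "ok-to-launch"   # meets the minimum launch threshold
    if metric >= operating_min:
        return "monitor"        # acceptable in production, watch closely
    if metric >= escalation_min:
        return "escalate"       # below operating threshold: human review
    return "block"              # below escalation threshold: halt rollout

print(triage(0.93))  # ok-to-launch
print(triage(0.80))  # escalate
```

In practice, each tier would be calibrated against the system's risk profile and business impact rather than fixed constants.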

Why this topic matters

AI governance often fails in practice because it is either too heavy or too vague. Teams need governance that works in the real world, with clear testing, clear ownership, and clear standards for production. That includes designing scenario-based tests for AI systems and calibrating evaluation thresholds for production AI. The goal is simple: reduce risk, move faster, build trust.

You'll learn from

Stella Liu

Head of AI Applied Science

Stella Liu is an AI Evals practitioner and researcher, specializing in evaluation frameworks for large language models and AI products.

Since 2023, she has led real-world AI evaluation projects, where she established the first AI product evaluation framework for Higher Education and continues to advance research on the safe and responsible use of AI. Her work combines hands-on product experience with academic rigor, bringing proven eval methods into both enterprise and educational contexts.

Before joining the education industry, Stella worked at Shopify and Carvana, where she built data-driven systems that powered product innovation and operational efficiency at scale.

Stella also writes an AI Evals newsletter on Substack: https://datasciencexai.substack.com/

Amy Chen

Cofounder at AI Evals & Analytics

Amy Chen is an AI product engineer with over 10 years of experience building AI-powered products from 0 to 1 since the pre-ChatGPT era.

She has a background in Computational Linguistics and Statistics, and brings deep expertise across AI/ML engineering, data science, and growth.

Amy has worked across AdTech, IoT, Conversational AI, and B2B2C environments, with a focus on building production-ready AI systems and driving go-to-market execution.

She is a Top 1% Mentor in AI/ML Engineering on ADPList and advises early-stage technology startups.

Niharika Srivastav

AIGP, AI Governance Advisor, Speaker, Author


Sanjay Saxena

Chief AI Officer (Fractional), CISSP, Ex-Deloitte, KPMG, Harvard

Stanford University
Harvard University
Deloitte
KPMG

Sign up to join this lesson
