Governing GenAI - From Prompts to Policy

New
·

3 Days

·

Cohort-based Course

Learn to validate and govern RAG-based generative AI with cutting-edge human-calibrated testing and NIST AI 600-1 guidance.

Instructors appear in:

The New York Times
WIRED
S&P Global
The National Academies of Sciences, Engineering, and Medicine
National Institute of Standards and Technology

Course overview

Validating Generative AI: Practical Risk Management for RAG Systems and Beyond

This expert-led online course offers a rigorous, practice-based approach to generative AI validation and governance. Participants will learn how to evaluate RAG systems using information retrieval techniques, embedding-based diagnostics, and human-in-the-loop testing—without defaulting to external benchmarks. The course also covers red-teaming and field testing strategies for assessing real-world risk scenarios. Grounded in NIST AI 600-1 guidance and the instructors' experiences in high-stakes domains like consumer finance, this course is designed for professionals who want substance over buzzwords.

For professionals in AI, finance, and risk tasked with building, validating, or governing AI systems

01

Quantitative and Financial Analysts who need to understand how generative AI impacts model-driven investment and risk strategies.

02

Data Scientists and ML Engineers applying advanced techniques to build or assess retrieval-augmented generation (RAG) systems.

03

Model Risk and Validation Professionals ensuring AI systems meet regulatory and internal governance standards.

04

Portfolio and Market Risk Analysts evaluating how AI-related risks can affect financial exposure.

05

Regulators, Consultants, and Solution Providers supporting compliance, oversight, or advisory roles for AI and model governance in financial

Prerequisites

  • Desire to build robust, high-impact AI systems

    Our techniques are not quick hacks. They're for practitioners who want to put the work in to build robust systems for high-impact domains.

  • Experience with AI, machine learning, and Python

    Our techniques aren't simple. You'll need some knowledge of language models, machine learning, and Python to get the most out of the course.

  • Exposure to technology governance approaches

    We don't build slop. This class is for people who understand the best tech is achieved through testing and review.

Course outcomes and benefits:

Industry Trends in Risk Management of Generative AI

Explore emerging industry trends in managing the risks of generative AI, including shifting regulatory expectations and evolving best practices for validation and governance.

A Formal Validation Approach for RAG-based Generative AI Systems

Learn a structured approach to validating RAG-based generative AI systems, grounded in information retrieval theory, text embedding analysis, broad test coverage, and human calibration—designed to ensure outputs are relevant, grounded, and aligned with organizational goals.

Red-teaming and Field Testing  Approaches for Generative AI

Gain practical insights into red-teaming and field testing for generative AI, including techniques to simulate real-world misuse, identify vulnerabilities, and assess system behavior under stress—helping ensure resilience, safety, and compliance in dynamic deployment settings.

Themes of NIST AI 600-1

Understand key themes of NIST AI 600-1, the Generative AI Risk Profile, including risks tied to hallucination, prompt injection, data provenance, and human-AI interaction. Learn how these themes map to broader AI governance goals like transparency, fairness, and accountability.

Participants will also benefit from:

  • Enjoy a live, interactive course without the cost or hassle of travel.
  • Access course materials in advance and anytime via the event portal.
  • Learn through real-world examples and case-based discussions.
  • Join live conversations with peers and request recordings if needed.

What’s included

Live sessions

Learn directly from Patrick Hall & Agus Sudjianto in a real-time, interactive format.

Lifetime access

Go back to course content and recordings whenever you need to.

Community of peers

Stay accountable and share insights with like-minded professionals.

Certificate of completion

Share your new skills with your employer or on LinkedIn.

Maven Guarantee

This course is backed by the Maven Guarantee. Students are eligible for a full refund up until the halfway point of the course.

Course syllabus

3 live sessions • 8 lessons

Week 1

Jul 22—Jul 24

    Jul

    22

    Session 1

    Tue 7/221:30 PM—5:00 PM (UTC)

    Session 1: Introduction to AI Governance and Validation

    2 items

    Jul

    23

    Session 2

    Wed 7/231:30 PM—5:00 PM (UTC)

    Session 2: Validation of Retrieval Augmented Generation (RAG) Systems (Cont.)

    1 item

    Jul

    24

    Session 3

    Thu 7/241:30 PM—5:00 PM (UTC)

    Session 3: Red Teaming and Field Testing

    2 items

Bonus

    Keep Up With Your Instructors

    3 items

Meet your instructors

Patrick Hall

Patrick Hall

Principal Scientist at HallResearch.ai, GWU Faculty

Patrick Hall is principal scientist at HallResearch.ai. He is also teaching faculty at the George Washington University (GWU) School of Business, offering data ethics, business analytics, and machine learning classes to graduate and undergraduate students. Patrick conducts research in support of NIST's AI Risk Management Framework, works with leading fair lending and AI risk management advisory firms, and serves on the board of directors for the AI Incident Database.


Prior to co-founding HallResearch.ai, Patrick was a founding partner at BNH.AI, where he pioneered the emergent discipline of auditing and red-teaming generative AI systems; he also led H2O.ai's efforts in the development of responsible AI products, resulting in one of the world's first commercial applications for explainability and bias management in machine learning. 


Patrick has been invited to speak on AI and machine learning topics at the National Academies, the Association for Computing Machinery SIG-KDD ("KDD"), and the American Statistical Association Joint Statistical Meetings. His expertise has been sought in the New York Times and NPR, he has been published in outlets like Information, Frontiers in AI, McKinsey.com, O'Reilly Media, and Thomson Reuters Regulatory Intelligence, and his technical work has been profiled in Fortune, WIRED, InfoWorld, TechCrunch, and others. Patrick is the lead author of the book Machine Learning for High-Risk Applications.

Career highlights

  • Science and go-to-market lead for early responsible AI products

  • Contributor to NIST AI Risk Management Framework and ARIA AI evaluation program

  • Advisor to Fortune 500 firms and government agencies

  • Author of Machine Learning for High Risk Applications

Agus Sudjianto

Agus Sudjianto

Senior VP, H2o.ai and Former EVP & Head of Model Risk, Wells Fargo

Dr. Agus Sudjianto is the Senior Vice President, Risk and Technology for Enterprise at H2O.ai. He brings over two decades of experience in the financial services industry, with leadership roles in risk management, analytics, and modeling at Wells Fargo and Bank of America. Agus is renowned for pioneering PiML (Python Interpretable Machine Learning), a set of methods and tools for creating interpretable and understandable machine learning models. He has championed the adoption of PiML in the industry, open-sourcing it to democratize models for ensuring reliability, resilience, and fairness in high-risk applications. At H2O.ai, Agus focuses on developing H2O Eval products to address AI safety, reliability, and compliance, with a focus on Generative AI applications in banking and model risk management. Agus holds a PhD in Engineering from Wayne State University and a Master’s degree from MIT.

Career highlights

  • Analytics leader for Ford PowerStroke engines

  • Led model validation at Bank of American and Wells Fargo

  • Inventor of PiML and MoDevA AI software suites

  • Prolific advisor, author, and speaker

A pattern of wavy dots

Join an upcoming cohort

Governing GenAI - From Prompts to Policy

Cohort 1

$895

Dates

July 22—24, 2025

Payment Deadline

July 22, 2025
Get reimbursed

Course schedule

4-6 hours per module

  • Module 1: July 22, 2025

    Tuesday, 9:30 am - 1:00 pm EST

    • Model Governance and Generative AI (Patrick Hall)
    • Formal Validation of Retrieval Augment Generation (RAG) Systems (Agus Sudjianto)
  • Module 2: July 23, 2025

    Wednesday, 9:30 am -1:00 pm EST

    • Formal Validation of Retrieval Augment Generation (RAG) Systems (Continued) (Agus Sudjianto)
  • Module 3: July 24, 2025

    Thursday, 9:30 am -1:00 pm EST

    • Red-teaming for Real-world Assessment of Text-based Generative AI Systems (Patrick Hall)
    • Field Testing for Real-world Assessment of Text-based Generative AI Systems (Patrick Hall)

Learning is better with cohorts

Learning is better with cohorts

Active hands-on learning

This course builds on live workshops and hands-on projects

Interactive and project-based

You’ll be interacting with other learners through breakout rooms and project teams

Learn with a cohort of peers

Join a community of like-minded people who want to learn and grow alongside you

Frequently Asked Questions

A pattern of wavy dots

Join an upcoming cohort

Governing GenAI - From Prompts to Policy

Cohort 1

$895

Dates

July 22—24, 2025

Payment Deadline

July 22, 2025
Get reimbursed

$895

3 Days