3 Days
·Cohort-based Course
Learn to validate and govern RAG-based generative AI with cutting-edge human-calibrated testing and NIST AI 600-1 guidance.
3 Days
·Cohort-based Course
Learn to validate and govern RAG-based generative AI with cutting-edge human-calibrated testing and NIST AI 600-1 guidance.
Instructors appear in:
Course overview
This expert-led online course offers a rigorous, practice-based approach to generative AI validation and governance. Participants will learn how to evaluate RAG systems using information retrieval techniques, embedding-based diagnostics, and human-in-the-loop testing—without defaulting to external benchmarks. The course also covers red-teaming and field testing strategies for assessing real-world risk scenarios. Grounded in NIST AI 600-1 guidance and the instructors' experiences in high-stakes domains like consumer finance, this course is designed for professionals who want substance over buzzwords.
01
Quantitative and Financial Analysts who need to understand how generative AI impacts model-driven investment and risk strategies.
02
Data Scientists and ML Engineers applying advanced techniques to build or assess retrieval-augmented generation (RAG) systems.
03
Model Risk and Validation Professionals ensuring AI systems meet regulatory and internal governance standards.
04
Portfolio and Market Risk Analysts evaluating how AI-related risks can affect financial exposure.
05
Regulators, Consultants, and Solution Providers supporting compliance, oversight, or advisory roles for AI and model governance in financial
Our techniques are not quick hacks. They're for practitioners who want to put the work in to build robust systems for high-impact domains.
Our techniques aren't simple. You'll need some knowledge of language models, machine learning, and Python to get the most out of the course.
We don't build slop. This class is for people who understand the best tech is achieved through testing and review.
Industry Trends in Risk Management of Generative AI
Explore emerging industry trends in managing the risks of generative AI, including shifting regulatory expectations and evolving best practices for validation and governance.
A Formal Validation Approach for RAG-based Generative AI Systems
Learn a structured approach to validating RAG-based generative AI systems, grounded in information retrieval theory, text embedding analysis, broad test coverage, and human calibration—designed to ensure outputs are relevant, grounded, and aligned with organizational goals.
Red-teaming and Field Testing Approaches for Generative AI
Gain practical insights into red-teaming and field testing for generative AI, including techniques to simulate real-world misuse, identify vulnerabilities, and assess system behavior under stress—helping ensure resilience, safety, and compliance in dynamic deployment settings.
Themes of NIST AI 600-1
Understand key themes of NIST AI 600-1, the Generative AI Risk Profile, including risks tied to hallucination, prompt injection, data provenance, and human-AI interaction. Learn how these themes map to broader AI governance goals like transparency, fairness, and accountability.
Participants will also benefit from:
Live sessions
Learn directly from Patrick Hall & Agus Sudjianto in a real-time, interactive format.
Lifetime access
Go back to course content and recordings whenever you need to.
Community of peers
Stay accountable and share insights with like-minded professionals.
Certificate of completion
Share your new skills with your employer or on LinkedIn.
Maven Guarantee
This course is backed by the Maven Guarantee. Students are eligible for a full refund up until the halfway point of the course.
Governing GenAI - From Prompts to Policy
3 live sessions • 8 lessons
Jul
22
Jul
23
Jul
24
Principal Scientist at HallResearch.ai, GWU Faculty
Patrick Hall is principal scientist at HallResearch.ai. He is also teaching faculty at the George Washington University (GWU) School of Business, offering data ethics, business analytics, and machine learning classes to graduate and undergraduate students. Patrick conducts research in support of NIST's AI Risk Management Framework, works with leading fair lending and AI risk management advisory firms, and serves on the board of directors for the AI Incident Database.
Prior to co-founding HallResearch.ai, Patrick was a founding partner at BNH.AI, where he pioneered the emergent discipline of auditing and red-teaming generative AI systems; he also led H2O.ai's efforts in the development of responsible AI products, resulting in one of the world's first commercial applications for explainability and bias management in machine learning.
Patrick has been invited to speak on AI and machine learning topics at the National Academies, the Association for Computing Machinery SIG-KDD ("KDD"), and the American Statistical Association Joint Statistical Meetings. His expertise has been sought in the New York Times and NPR, he has been published in outlets like Information, Frontiers in AI, McKinsey.com, O'Reilly Media, and Thomson Reuters Regulatory Intelligence, and his technical work has been profiled in Fortune, WIRED, InfoWorld, TechCrunch, and others. Patrick is the lead author of the book Machine Learning for High-Risk Applications.
Career highlights
Science and go-to-market lead for early responsible AI products
Contributor to NIST AI Risk Management Framework and ARIA AI evaluation program
Advisor to Fortune 500 firms and government agencies
Author of Machine Learning for High Risk Applications
Senior VP, H2o.ai and Former EVP & Head of Model Risk, Wells Fargo
Dr. Agus Sudjianto is the Senior Vice President, Risk and Technology for Enterprise at H2O.ai. He brings over two decades of experience in the financial services industry, with leadership roles in risk management, analytics, and modeling at Wells Fargo and Bank of America. Agus is renowned for pioneering PiML (Python Interpretable Machine Learning), a set of methods and tools for creating interpretable and understandable machine learning models. He has championed the adoption of PiML in the industry, open-sourcing it to democratize models for ensuring reliability, resilience, and fairness in high-risk applications. At H2O.ai, Agus focuses on developing H2O Eval products to address AI safety, reliability, and compliance, with a focus on Generative AI applications in banking and model risk management. Agus holds a PhD in Engineering from Wayne State University and a Master’s degree from MIT.
Career highlights
Analytics leader for Ford PowerStroke engines
Led model validation at Bank of American and Wells Fargo
Inventor of PiML and MoDevA AI software suites
Prolific advisor, author, and speaker
Join an upcoming cohort
Cohort 1
$895
Dates
Payment Deadline
4-6 hours per module
Module 1: July 22, 2025
Tuesday, 9:30 am - 1:00 pm EST
Module 2: July 23, 2025
Wednesday, 9:30 am -1:00 pm EST
Module 3: July 24, 2025
Thursday, 9:30 am -1:00 pm EST
Active hands-on learning
This course builds on live workshops and hands-on projects
Interactive and project-based
You’ll be interacting with other learners through breakout rooms and project teams
Learn with a cohort of peers
Join a community of like-minded people who want to learn and grow alongside you
Join an upcoming cohort
Cohort 1
$895
Dates
Payment Deadline