Explainable AI for Decision-Making Applications

5.0 (10 ratings) · 10 Days · Cohort-based Course

Learn to build accurate, transparent, and understandable ML models. Get real-world policy and compliance insights for high-risk applications.

Published in and featured by

The National Academies of Sciences, Engineering, and Medicine
National Institute of Standards and Technology
American Bar Association
MDPI
S&P Global

Course overview

Learn to build accurate, transparent, and interpretable ML/AI models.

In today's rapidly evolving artificial intelligence landscape, engineers and risk managers play pivotal roles in balancing innovation with accountability. Our course on explainable AI (XAI) equips professionals with the essential skills to navigate this complex, fast-expanding terrain. Students gain hands-on experience and explore compliance considerations drawn from real-world applications across diverse sectors. By merging technical expertise with compliance insight, the course empowers individuals to deliver impactful, transparent AI solutions in their own domains, fostering both innovation and integrity.


This course explores XAI, covering essential background definitions and concepts, explainable feature engineering, the diverse ecosystem of XAI models, post-hoc explanation methods, and the latest developments in audit and transparency laws and regulations.


Some featured topics in this course:


• The fANOVA framework

• From penalized GLM to EBM and GAMinet

• Monotonic GBMs

• Surrogate model approaches

• Explainable feature engineering

• LOCO, LOFO, and perturbation

• SHAP: The good, bad, and ugly (see the sketch below)

• Security and bias considerations
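
To give a flavor of the Python examples used in the sessions, here is a minimal sketch, not drawn from the course materials, that touches two of the topics above: fitting a monotonic GBM with XGBoost and explaining it post hoc with SHAP. The synthetic data and every parameter choice are illustrative assumptions.

    # Illustrative sketch, not course code: a GBM constrained so its
    # predictions move in a known direction with each feature, plus
    # per-row SHAP contributions for post-hoc review.
    import numpy as np
    import xgboost as xgb
    import shap

    rng = np.random.default_rng(0)
    X = rng.uniform(size=(1000, 2))                        # two synthetic features
    y = 2 * X[:, 0] - X[:, 1] + rng.normal(0, 0.1, 1000)   # known directions

    # +1: prediction must rise with feature 0; -1: fall with feature 1
    model = xgb.XGBRegressor(monotone_constraints=(1, -1), n_estimators=200)
    model.fit(X, y)

    # Post-hoc explanation: additive per-feature contributions for each row
    explainer = shap.TreeExplainer(model)
    shap_values = explainer.shap_values(X)

Constraints like these trade a little raw accuracy for relationships that domain experts, validators, and regulators can check directly.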


Background information and definitions are sourced from the National Institute of Standards and Technology (NIST), the National Academies, and other authoritative sources. The instructor teaches technical subjects with Python examples and compliance lessons based on his real-world experience in consumer finance, employment, and other regulated applications of ML. (Note: The instructor is not an attorney.)


This course is designed primarily for data scientists and ML engineers. Technical risk executives and risk managers may find it a useful, updated overview of newer ML approaches suited to high-stakes applications. Materials may also help regulators and policy professionals understand the current state of ML technologies that can be used to comply with laws, regulations, or standards. And if you're coming to ML from physics, econometrics, or psychometrics, this course can help you blend newer ML techniques with established domain expertise and notions of validity and causality.

Who is this course for

01

ML engineers and data scientists who want to learn about XAI.

02

Technical risk executives seeking an updated overview of newer ML approaches suited for high-risk applications.

03

Policy professionals interested in the current state of ML technologies that may be used to comply with regulations and standards.

Learning Objectives

Build accurate, transparent, and understandable ML models.

  • Ecosystem of cutting-edge explainable ML models
  • Essential compliance awareness for ML transparency
  • Emergent audit and transparency laws and regulations

Interpret and explain sophisticated ML algorithms to uncover insights, identify biases, and understand complex decisions.

  • Best practices for post-hoc explanation
  • Explainable feature engineering
  • Testing explanation quality (see the sketch below)
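
One way to make "testing explanation quality" concrete is to check whether two independent importance measures agree. The sketch below is an illustrative assumption rather than course code: it compares a random forest's impurity-based importances with perturbation-based (permutation) importances on held-out data, where sharp disagreement is a cue to investigate further.

    # Illustrative sketch: sanity-check one importance measure against another.
    from sklearn.datasets import make_regression
    from sklearn.ensemble import RandomForestRegressor
    from sklearn.inspection import permutation_importance
    from sklearn.model_selection import train_test_split

    X, y = make_regression(n_samples=500, n_features=5, random_state=0)
    X_tr, X_val, y_tr, y_val = train_test_split(X, y, random_state=0)

    model = RandomForestRegressor(random_state=0).fit(X_tr, y_tr)

    # Perturbation-based importance: shuffle one feature at a time on
    # held-out data and measure the drop in score
    perm = permutation_importance(model, X_val, y_val, n_repeats=10,
                                  random_state=0)
    for i in range(X.shape[1]):
        print(f"feature {i}: impurity={model.feature_importances_[i]:.3f}  "
              f"permutation={perm.importances_mean[i]:.3f}")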

Build safer and more trustworthy AI systems.

Tackle real-world problems confidently and drive positive impact in domains such as healthcare, finance, and human resources.

This course includes

4 interactive live sessions

Lifetime access to course materials

In-depth lessons

Direct access to instructor

Projects to apply learnings

Guided feedback & reflection

Private community of peers

Course certificate upon completion

Maven Satisfaction Guarantee

This course is backed by Maven’s guarantee. You can receive a full refund within 14 days after the course ends, provided you meet the completion criteria in our refund policy.

Course syllabus

Week 1

Jul 8—Jul 14

    Session 1: Introduction to Explainable ML in Context

    Mon 7/8, 2:00 PM—4:00 PM (UTC)

    Session 2: The Ecosystem of Explainable Models

    Wed 7/10, 2:00 PM—4:30 PM (UTC)

    The Socio-technical Systems Approach
    The Data-scientific Method vs. The Scientific Method
    Transparency and Other AI Considerations
    Transparency Requirements
    Inherent (and Wicked) Difficulties
    Making Progress
    Explainable Feature Engineering
    fANOVA: From Penalized GLM to EBM and GAMinet
    Monotonic GBMs
    Explainable ML Demo 1
    Discussion

Week 2

Jul 15—Jul 17

    Session 3: Post-hoc Explanation

    Mon 7/15, 2:00 PM—4:30 PM (UTC)

    Session 4: Emergent Audit and Transparency Laws and Regulations

    Wed 7/17, 2:00 PM—4:00 PM (UTC)

    Surrogate Model Approaches
    Plots of Model Behavior: Partial Dependence, ICE, and ALE
    Prototypes and Criticisms
    Feature Importance: LOCO, LOFO, and Perturbation
    SHAP: The Good, Bad, and Ugly
    Security and Bias Considerations
    Testing Explanation Quality
    Adverse Action Notices
    NYC Local Law 144
    EU AI Act
    NIST AI Risk Management Framework
    Explainable ML Demo 2
    Discussion


Meet Patrick Hall

Patrick Hall

Principal Scientist, HallResearch.ai & Assistant Professor, George Washington University

Patrick Hall is the principal scientist at HallResearch.ai. He is also an assistant professor of decision sciences at the George Washington University School of Business, teaching data ethics, business analytics, and machine learning classes. Patrick conducts research in support of NIST's AI Risk Management Framework, works with leading fair lending and AI risk management advisory firms, and serves on the board of directors for the AI Incident Database. Prior to co-founding HallResearch.ai, Patrick was a partner at BNH.AI, where he pioneered the emergent discipline of auditing and red-teaming generative AI systems; he also led H2O.ai's efforts in the development of responsible AI, resulting in one of the world's first commercial applications for explainability and bias mitigation in machine learning. Patrick started his career in global customer-facing and R&D roles at SAS Institute.


Patrick studied computational chemistry at the University of Illinois before graduating from the Institute for Advanced Analytics at North Carolina State University. He has been invited to speak on AI and machine learning topics at the National Academies of Sciences, Engineering, and Medicine, the Association for Computing Machinery SIG-KDD (“KDD”), and the American Statistical Association Joint Statistical Meetings. He has been published in outlets like Information, Frontiers in AI, McKinsey.com, O'Reilly Media, and Thomson Reuters Regulatory Intelligence, and his technical work has been profiled in Fortune, WIRED, InfoWorld, TechCrunch, and others. Patrick is the lead author of the book Machine Learning for High-Risk Applications.


With affiliations across private industry, civil society, academia, and government, Patrick brings one of the widest possible perspectives to AI and matters of risk. He has built machine learning software solutions and advised on AI risk for Fortune 100 companies, cutting-edge startups, Big Law, and US and foreign government agencies.

Course Schedule

4-6 hours per week

  • Monday, July 8, 2024

    10:00 AM - 12:00 PM ET

    Live lectures followed by discussion session.

  • Wednesday, July 10, 2024

    10:00 AM - 12:30 PM ET

    Live lectures and demos followed by discussion session.

  • Monday, July 15, 2024

    10:00 AM - 12:30 PM ET

    Live lectures and demos followed by discussion session.

  • Wednesday, July 17, 2024

    10:00 AM - 12:00 PM ET

    Live lectures followed by discussion session.


Learning is better with cohorts

Active hands-on learning

This course builds on live workshops and hands-on projects

Interactive and project-based

You’ll be interacting with other learners through breakout rooms and project teams

Learn with a cohort of peers

Join a community of like-minded people who want to learn and grow alongside you

