5.0 (10)
10 Days · Cohort-based Course
Learn to build accurate, transparent, understandable ML models. Get real-world policy and compliance insights for high-risk applications.
Course overview
In today's rapidly evolving artificial intelligence landscape, engineers and risk managers play pivotal roles in balancing innovation with accountability. Our course on explainable AI (XAI) equips professionals with the essential skills to navigate this complex, fast-expanding terrain. Students gain hands-on experience and explore compliance considerations drawn from real-world applications across diverse sectors. By merging technical expertise with compliance insight, this course empowers individuals to drive impactful, transparent AI solutions in their respective domains, fostering both innovation and integrity.
This course explores XAI, covering essential background definitions and concepts, explainable feature engineering, the diverse ecosystem of XAI models, post-hoc explanation methods, and the latest developments in audit and transparency laws and regulations.
Some featured topics in this course on explainable AI (XAI), with an illustrative Python sketch after the list:
• The fANOVA framework
• From penalized GLM to EBM and GAMinet
• Monotonic GBMs
• Surrogate model approaches
• Explainable feature engineering
• LOCO, LOFO, and perturbation
• SHAP: The good, bad, and ugly
• Security and bias considerations
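To give a flavor of the hands-on Python material, here is a minimal, illustrative sketch touching three of the topics above: monotonic GBMs, post-hoc explanation with SHAP, and a surrogate model approach. It is not drawn from the course materials; the synthetic credit-style data, feature names, and constraint directions are assumptions for demonstration, and it presumes the xgboost, shap, and scikit-learn packages are installed.

```python
# Illustrative sketch only -- not actual course material.
# The data and monotonicity directions below are hypothetical.
import numpy as np
import xgboost as xgb
import shap
from sklearn.tree import DecisionTreeClassifier, export_text

# Synthetic credit-style data: income, debt_to_income, utilization.
rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 3))
y = (X[:, 1] + X[:, 2] - X[:, 0] + rng.normal(scale=0.5, size=1000) > 0).astype(int)

# 1. Monotonic GBM: constrain each feature's directional effect so the
#    model matches domain knowledge (-1 decreasing risk, +1 increasing).
model = xgb.XGBClassifier(
    n_estimators=100,
    max_depth=3,
    monotone_constraints=(-1, 1, 1),  # income lowers risk; DTI and utilization raise it
    eval_metric="logloss",
)
model.fit(X, y)

# 2. Post-hoc explanation with SHAP: per-observation feature contributions.
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X)
print("global importance:", np.abs(shap_values).mean(axis=0))
print("local explanation, first row:", shap_values[0])

# 3. Global surrogate: fit an interpretable tree to the GBM's predictions
#    (not the labels) to recover human-readable approximate decision rules.
surrogate = DecisionTreeClassifier(max_depth=3)
surrogate.fit(X, model.predict(X))
print(export_text(surrogate, feature_names=["income", "debt_to_income", "utilization"]))
```

In regulated settings, monotone constraints like these often trade a small amount of raw accuracy for alignment with domain expectations, which can make a model far easier to explain, validate, and defend.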
Background information and definitions are sourced from the National Institute of Standards and Technology (NIST), the National Academies, and other authoritative sources. The instructor teaches technical subjects with Python examples and compliance lessons based on his real-world experience in consumer finance, employment, and other regulated applications of ML. (Note: The instructor is not an attorney.)
This course is designed primarily for data scientists and ML engineers. Technical risk executives and risk managers may find it a useful, updated overview of newer ML approaches suited to high-stakes applications. The materials may also help regulators and policy professionals gain insight into the current state of ML technologies that can be used to comply with laws, regulations, or standards. And if you're coming to ML from physics, econometrics, or psychometrics, this course can help you blend newer ML techniques with established domain expertise and notions of validity and causality.
01
ML engineers and data scientists who want to learn about XAI.
02
Technical risk executives seeking an updated overview of newer ML approaches suited for high-risk applications.
03
Policy professionals interested in the current state of ML technologies that may be used to comply with regulations and standards.
Build accurate, transparent, and understandable ML models.
Interpret and explain sophisticated ML algorithms to uncover insights, identify biases, and understand complex decisions.
Build safer and more trustworthy AI systems.
Tackle real-world problems confidently and drive positive impact in domains such as healthcare, finance, and human resources.
4 interactive live sessions
Lifetime access to course materials
In-depth lessons
Direct access to instructor
Projects to apply learnings
Guided feedback & reflection
Private community of peers
Course certificate upon completion
Maven Satisfaction Guarantee
This course is backed by Maven’s guarantee. You can receive a full refund within 14 days after the course ends, provided you meet the completion criteria in our refund policy.
Explainable AI for Decision-Making Applications
Live session dates: Jul 8, Jul 10, Jul 15, and Jul 17
Principal Scientist, HallResearch.ai & Assistant Professor, George Washington University
Patrick Hall is the principal scientist at HallResearch.ai. He is also an assistant professor of decision sciences at the George Washington University School of Business, teaching data ethics, business analytics, and machine learning classes. Patrick conducts research in support of NIST's AI Risk Management Framework, works with leading fair lending and AI risk management advisory firms, and serves on the board of directors for the AI Incident Database. Prior to co-founding HallResearch.ai, Patrick was a partner at BNH.AI, where he pioneered the emergent discipline of auditing and red-teaming generative AI systems; he also led H2O.ai's efforts in the development of responsible AI, resulting in one of the world's first commercial applications for explainability and bias mitigation in machine learning. Patrick started his career in global customer-facing and R&D roles at SAS Institute.
Patrick studied computational chemistry at the University of Illinois before graduating from the Institute for Advanced Analytics at North Carolina State University. He has been invited to speak on AI and machine learning topics at the National Academies of Sciences, Engineering, and Medicine, the Association for Computing Machinery SIG-KDD (“KDD”), and the American Statistical Association Joint Statistical Meetings. He has been published in outlets like Information, Frontiers in AI, McKinsey.com, O'Reilly Media, and Thomson Reuters Regulatory Intelligence, and his technical work has been profiled in Fortune, WIRED, InfoWorld, TechCrunch, and others. Patrick is the lead author of the book Machine Learning for High-Risk Applications.
With affiliations across private industry, civil society, academia, and government, Patrick brings one of the widest possible perspectives to AI and matters of risk. He has built machine learning software solutions and advised on AI risk for Fortune 100 companies, cutting-edge startups, Big Law, and US and foreign government agencies.
4-6 hours per week
Monday, July 8, 2024
10:00 AM - 12:00 PM ET
Live lectures followed by a discussion session.
Wednesday, July 10, 2024
10:00 AM - 12:30 PM ET
Live lectures and demos followed by a discussion session.
Monday, July 15, 2024
10:00 AM - 12:30 PM ET
Live lectures and demos followed by a discussion session.
Wednesday, July 17, 2024
10:00 AM - 12:00 PM ET
Live lectures followed by a discussion session.
Be the first to know about upcoming cohorts
Active hands-on learning
This course builds on live workshops and hands-on projects
Interactive and project-based
You’ll be interacting with other learners through breakout rooms and project teams
Learn with a cohort of peers
Join a community of like-minded people who want to learn and grow alongside you