Author of Practical Data Privacy

Ensure your AI platforms and products are ethically sound and legally compliant. This intensive course equips you with the essential skills for modern AI privacy engineering, offering a blend of cutting-edge best practices and practical implementation strategies.
Through live sessions, collaborative design workshops, and a capstone project simulating a real-world challenge, you'll gain expertise in defining privacy, assessing and mitigating risks, implementing pseudonymization and input sanitization, and integrating guardrails into AI workflows.
This course is designed for mid-career and senior-level software engineers, data engineers, machine learning engineers, and privacy professionals seeking to enhance their expertise in a rapidly evolving field.
Participants will work collaboratively in multidisciplinary teams, mastering not only technical skills but also crucial concepts in architecture, risk assessment, and mitigation prioritization. With the increasing use of sensitive data in AI products, these skills are in demand and future-proof.
Empower your team with real advice and hands-on skills to build privacy into today's AI workflows and systems.
Understand risks introduced by architectural choices, interfaces and undocumented data flows
Prioritize risks based on maturity, scope and product/user understanding
Develop multidisciplinary assessments to communicate risk effectively at large organizations
Gain hands-on practice with libraries and approaches for pseudonymizing sensitive text and image inputs (see the sketch after this list)
Evaluate where LLMs and smaller task-specific models can assist in privacy tooling
Determine where unaddressable risk lies and discuss product-focused interventions
Spot common architectural patterns that prevent privacy mistakes from being addressed
Investigate prompt routing, rewriting and local models to build privacy into architecture choices
Add privacy observability to data flows to monitor and alert on privacy failures
Build privacy evaluation criteria into model evaluation workflows
Practice Privacy Red Teaming in real-world AI workflows
Prioritize product-specific testing for real-time alerting and observability
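
One of the hands-on topics above is pseudonymizing sensitive text inputs before they reach a model. As a flavor of that kind of exercise, here is a minimal Python sketch that replaces detected emails and phone numbers with keyed-hash pseudonyms; the regex patterns, token format, and secret handling are illustrative assumptions rather than course material, and real workflows would typically use dedicated detection libraries with broader entity coverage.

```python
# Minimal sketch: pseudonymize emails and phone numbers in a prompt
# before sending it to a model. Illustrative only; patterns, token
# format, and key handling are assumptions, not course material.
import hashlib
import hmac
import re

SECRET_KEY = b"replace-with-a-managed-secret"  # hypothetical key for keyed hashing

PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),
    "PHONE": re.compile(r"\+?\d[\d\s().-]{7,}\d"),
}

def pseudonym(value: str, label: str) -> str:
    """Derive a stable, non-reversible token for a detected value."""
    digest = hmac.new(SECRET_KEY, value.encode(), hashlib.sha256).hexdigest()[:8]
    return f"<{label}_{digest}>"

def pseudonymize(text: str) -> str:
    """Replace detected identifiers with consistent pseudonyms."""
    for label, pattern in PATTERNS.items():
        text = pattern.sub(lambda m, l=label: pseudonym(m.group(), l), text)
    return text

if __name__ == "__main__":
    prompt = "Contact Jane at jane.doe@example.com or +1 415 555 0100."
    # Prints the prompt with the email and phone number replaced by
    # <EMAIL_...> and <PHONE_...> tokens.
    print(pseudonymize(prompt))
```

Because the pseudonyms are keyed hashes, the same input value always maps to the same token, which keeps downstream analytics and evaluations consistent without exposing the raw identifier.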

Author of Practical Data Privacy, Specialist in AI/ML Systems
Data, Software and Machine Learning Engineers
Technical Privacy Professionals (Privacy Engineers)
Privacy Leadership (technically oriented)
You should be familiar with Python and comfortable using common Python data libraries. You should be willing to adapt code and build out AI workflows.
You have technical chops and either want to expand them or integrate them into your work with AI/ML systems.
You are comfortable in technical conversations and want to increase your knowledge of AI privacy risk.
18 live sessions • 6 projects
Apr 20 – What Privacy Is and Isn't
Apr 22 – Mapping AI Systems: A Privacy Perspective
Apr 24 – Optional: Setting Up Your Environment (Drop-In Hours)
Apr 27 – User Flow: Creating Synthetic Data, Building Initial Evaluations
Apr 29 – AI Risk Hunt
May 1 – Optional Office Hours: Setting Up Your Data and Evaluations
Live sessions: 2-3 hrs / week
Mon, Apr 20: 4:00 PM–5:00 PM (UTC)
Wed, Apr 22: 4:00 PM–5:00 PM (UTC)
Fri, Apr 24: 5:00 PM–6:00 PM (UTC)
Projects: 2 hrs / week
Async content: 1 hr / week
Optional additional readings and discussions
$1,400 USD