4.4 (66)
4 Weeks
·Cohort-based Course
#1 AI Security Course. Brought to you by the team behind Learn Prompting & HackAPrompt, backed by OpenAI.
This course is popular
9 people enrolled last week.
4.4 (66)
4 Weeks
·Cohort-based Course
#1 AI Security Course. Brought to you by the team behind Learn Prompting & HackAPrompt, backed by OpenAI.
This course is popular
9 people enrolled last week.
Led AI Prompting & Security workshops at
Course overview
Prompt injection attacks are the #1 security vulnerability in enterprise AI systems. Attackers can manipulate your AI chatbots to steal sensitive data, access internal documents, and bypass your safety guardrails. Most organizations using AI don't even know they're at risk.
In 2023, I partnered with OpenAI to run HackAPrompt, the 1st Generative AI Red Teaming Competition ever, to study how the AI models can be hijacked. My research has since been used by teams at OpenAI, Google DeepMind, Meta, and Anthropic to improve their model's resistance to prompt injections. Most notably, OpenAI used it to make their model's up to 46% more secure.
Now, I advise Governments & Enterprises how to secure their AI Systems, including teams at OpenAI, Microsoft, Harvard University, Deloitte, Stanford University, and more. And because I love to teach... I created this Masterclass to teach you everything I know about AI Red Teaming!
🛡️ About This Course:
This On-Demand Masterclass is the #1 AI Security course for Cybersecurity Professionals, AI Trust & Safety leads, AI Product Managers, and AI Engineers who want to secure their AI systems against real-world threats.
You'll gain hands-on experience identifying vulnerabilities in Generative AI systems, including prompt injections, jailbreaks, and adversarial attacks. Using the HackAPrompt playground, you'll practice both attacking and defending AI systems, learning how to break them and how to secure them.
This course is practical, not just theoretical. You'll work on real-world projects, analyzing live AI systems for vulnerabilities. These projects prepare you for the AI Red Teaming Certified Professional (AIRTP+) Exam, a 24+ hour assessment that validates your AI security expertise. We've trained 1000's of professionals from Microsoft, Google, Capital One, IBM, ServiceNow, and Walmart. Graduates passed the certification exam and are now AIRTP+ certified, applying their skills to secure AI systems worldwide.
👨🏫 About Your Instructor:
I'm Sander Schulhoff. I'm an award-winning AI researcher and the youngest-ever recipient of the Best Paper Award at EMNLP 2023, selected from over 20,000 submissions. My research on AI security has been cited by OpenAI, Anthropic, DeepMind, Microsoft, Google, and NIST to improve their models.
I published the internet's 1st Prompt Engineering Guide two months before ChatGPT launched. Since then, I've partnered with OpenAI on a ChatGPT course and taught over 3 million people how to use Generative AI. Most recently, I led researchers from OpenAI, Microsoft, Google, and Stanford on The Prompt Report, the most comprehensive study on Prompt Engineering to date.
🌟 Learn from the World's Top AI Security Experts:
• Pliny the Prompter - The World's Most Famous AI Jailbreaker
The most renowned AI Jailbreaker, who has successfully jailbroken every major AI model—including OpenAI's o1, which hasn't even been made public!
• Joseph Thacker - Solo Founder & Top Bug Bounty Hunter
As a solo founder and security researcher, Joseph has submitted over 1,000 vulnerabilities across HackerOne and Bugcrowd.
• Jason Haddix - Former CISO of Ubisoft
Bug bounty hunter with over 20-years of experience in cybersecurity as the CISO of Ubisoft and Director of Penetration Testing at HP.
• Valen Tagliabue - Winner of HackAPrompt 1.0
An AI researcher specializing in NLP and cognitive science. Part of the winning team in HackAPrompt 1.0.
• David Williams-King - AI Researcher under Turing Prize Recipient
Research scientist at MILA, researching under Turing Prize winning Yoshua Bengio on his Safe AI For Humanity team.
• Leonard Tang - Founder and CEO of Haize Labs
NYC-based AI safety startup providing cutting-edge evaluation tools to leading companies like OpenAI and Anthropic.
• Richard Lundeen - Microsoft's AI Red Team Lead
Principal Software Engineering Lead for Microsoft's AI Red Team and maintainer of Microsoft PyRit.
• Johann Rehberger - Founded a Red Team at Microsoft
Built Uber's Red Team and discovered attack vectors like ASCII Smuggling. Found Bug Bounties in every major Gen AI model.
• Sandy Dunn - CISO with 20+ years of Experience in Healthcare
A seasoned CISO with 20+ years of experience in healthcare. Project lead for the OWASP Top 10 Risks for LLM Applications Cybersecurity and Governance.
• Donato Capitella - AI Security Blogger and Youtuber
With over 12 years of experience in offensive security and security assurance, Donato has gained a following for his AI security work. Alongside his blogs on WithSecure, he has taught over 300k people about building and breaking AI systems on his YouTube channel (@donatocapitella).
• Akshat Parikh - Leading Bug Bounty Hunter, AI Security Startup Founder
Former AI security researcher at a startup backed by OpenAI and DeepMind researchers. Ranked Top 21 in JP Morgan's Bug Bounty Hall of Fame and Top 250 in Google's Bug Bounty Hall of Fame—all by the age of 16.
01
CISOs or other executives aiming to incorporate high-level AI security into their business insights and organizational strategies.
02
Business or tech professionals transitioning into AI security seeking practical experience, supplementary training, and certifications
03
Project managers or product owners who want to be more effective at unblocking, supporting, and leading AI-related projects
04
Cybersecurity professionals seeking to master AI/ML red-teaming techniques and professional development opportunities
05
Government and regulatory officers responsible for AI policy who want to understand AI vulnerabilities, security, safety, and risks
Analyze Real-World AI Security Breaches
Study real-world AI security breaches to evaluate risks and develop effective prevention strategies. Gain insights into common vulnerabilities and learn how to mitigate future threats.
Master Advanced AI Red-Teaming Techniques
Gain hands-on experience with prompt injections, jailbreaking, and prompt hacking in the HackAPrompt playground. Learn to identify and exploit AI vulnerabilities, enhancing your offensive security skills to a professional level.
Implement effective defense mechanisms against prompt injections and other adversarial attacks to secure AI/ML systems.
Learn to implement robust defense strategies against prompt injections and adversarial attacks. Secure AI/ML systems by building resilient models and integrating security measures throughout the AI development lifecycle.
Earn an Industry-Recognized Certification
Upon completing the course and passing our AI Red Teaming Professional Certification exam, you'll become AIRTP+ Certified, which validates your expertise, enhances your professional credentials, and positions you as a leader in AI security.
Live sessions
Learn directly from Sander Schulhoff in a real-time, interactive format.
Learn Prompting+ Access
Complimentary access to over 15 comprehensive courses on Prompt Engineering, Prompt Hacking, and other related topics.
AIRT Certification
Receive a voucher to take the AI Red Teaming Professional Certificate Exam for free!
Lifetime access
Go back to course content and recordings whenever you need to.
Community of peers
Stay accountable and share insights with like-minded professionals.
Certificate of completion
Share your new skills with your employer or on LinkedIn.
Maven Guarantee
This course is backed by the Maven Guarantee. Students are eligible for a full refund up until the halfway point of the course.
21 live sessions • 43 lessons • 5 projects
Jul
7
Jul
7
Jul
8
Jul
9
Jul
10
Jul
14
Jul
14
Jul
17
Jul
15
Jul
21
Jul
21
Jul
22
Jul
23
Jul
24
Jul
28
Jul
28
Jul
29
Jul
29
Jul
30
4.4 (66 ratings)
Andy Purdy
Logan Kilpatrick
Alex Blanton
Prompt Hacking Techniques
Defenses for AI Systems
Hands-on Demos
Get the free recording
Join an upcoming cohort
Cohort 4
$1,495
Dates
Payment Deadline
Don't miss out! Enrollment closes in 3 days
Cohort 5
$1,495
Dates
Payment Deadline
Cohort 6
$1,495
Dates
Payment Deadline
4-6 hours per week
Mondays - Live Class Sessions
1:00pm - 2:00pm EST
8 Modules covered over 4 Live sessions. Each session is exercise & participation heavy with plenty of time for Q&A with Sander (& Guest Speakers).
Live Prompt Hacking Sessions
You'll engage in live prompt hacking sessions that will go over weekly projects by hacking state-of-the-art AIs. These sessions will also introduce students to the latest automatic red-teaming tools like PyRIT and Garak.
Weekly projects
2 hours per week
This course is hands-on! You’ll work on structured projects that apply red-teaming techniques to real-world scenarios and participate in guided sessions within the HackAPrompt Playground.
On-Demand Access to Learn Prompting Plus
20 hours+ of On-demand Course Content
Gain On-Demand Access to the AI/ML Red-Teaming Masterclass & Learn Prompting Plus, which includes over 20 hours of courses on ChatGPT (created in partnership with OpenAI), Prompt Engineering, Generative AI, AI Image-Creation, Prompt Hacking, & more.
CEO, Learn Prompting (3M+ Learners), HackAPrompt, & Award-winning AI Researcher
Sander Schulhoff is the Founder of Learn Prompting, the first prompt engineering guide released on the internet (even before ChatGPT launched), and an award-winning AI researcher from the University of Maryland who has authored research with OpenAI, Scale AI, Hugging Face, Stanford, US Federal Reserve, and Microsoft. He is also the co-instructor of "ChatGPT for Everyone," a course created in partnership with OpenAI.
He is the organizer of HackAPrompt, the largest AI Safety competition ever run, in partnership with OpenAI, ScaleAI, and Hugging Face. The competition attracted over 3,000 AI Hackers from around the world and collected 600,000 malicious prompts, making it the largest prompt injection dataset ever collected. It was also the largest competition ever held on the AICrowd platform, surpassing Amazon’s record for most competitors by 50%. His paper from this competition was awarded Best Theme Paper at EMNLP, the leading NLP conference, selected from over 20,000 papers submitted by PhD students and professors worldwide. OpenAI cited this paper in their Instruction Hierarchy, and used the dataset to make their models 30-50% safer from prompt injections (#1 security risk in LLMs).
In his recent research paper, "The Prompt Report," Sander Schulhoff led a team of researchers from OpenAI, Microsoft, Google, and Stanford University to conduct a comprehensive 76-page survey of over 1,500 prompting papers, analyzing the effectiveness of various prompting techniques, Agents, and Generative AI.
Schulhoff has spoken and led workshops at Microsoft, OpenAI, and Stanford University, and his Generative AI courses have trained over 3 million people to date, including thousands at Deloitte, Meta, Microsoft, and more.
Enter your email to get periodic updates about this course
$1,495
4.4 (66)
3 days left to enroll