4.3 (32)
5 Weeks
·Cohort-based Course
#1 AI Security Course. Learn AI Security from the Creator of HackAPrompt, the Largest AI Security competition ever held, backed by OpenAI
This course is popular
5 people enrolled last week.
4.3 (32)
5 Weeks
·Cohort-based Course
#1 AI Security Course. Learn AI Security from the Creator of HackAPrompt, the Largest AI Security competition ever held, backed by OpenAI
This course is popular
5 people enrolled last week.
Led AI Security & Prompting workshops at
Course overview
In 2023, I partnered with OpenAI, ScaleAI, & Hugging Face to launch HackAPrompt—the 1st & Largest Generative AI Red Teaming Competition ever held. Over 3,300 AI hackers competed to bypass model guardrails using prompt injections—the #1 Security Risk in LLMs.
The result? The Largest Dataset of Prompt Injection attacks ever collected—now used by every major Frontier AI Lab, including OpenAI, which used it to increase their models' resistance to Prompt Injection Attacks by up to 46%.
My research paper on HackAPrompt, Ignore This Title and HackAPrompt, was awarded Best Theme Paper at EMNLP 2023, one of the world’s leading NLP conferences, selected out of 20,000 papers. Since then, OpenAI has cited it in three major research papers: Instruction Hierarchy, Automated Red Teaming, and Adversarial Robustness papers.
Today, I've delivered talks on HackAPrompt, Prompt Engineering, and AI Red Teaming at OpenAI, Stanford University, Dropbox, Deloitte, and Microsoft. And because I love to teach... I created this course so I can teach you everything I know about AI Red Teaming!
About the Course:
This 6-week Masterclass is the #1 AI Security course for Cybersecurity Professionals, AI Trust & Safety leads, AI product managers, and engineers who want to master AI Red Teaming and secure AI systems against real-world threats.
You’ll gain hands-on experience identifying vulnerabilities in Generative AI systems, including prompt injections, jailbreaks, and adversarial attacks. Using the HackAPrompt playground, you’ll practice both attacking and defending AI systems—learning how to break them and how to secure them.
This course is practical, not just theoretical. You’ll work on real-world projects, analyzing live AI systems for vulnerabilities. These projects prepare you for the AI Red Teaming Certified Professional (AIRTP+) Exam, a 24+ hour assessment that validates your AI security expertise.
Our last cohort included 150 professionals from Microsoft, Google, Meta, Capital One, IBM, ServiceNow, and Walmart. Graduates passed the certification exam and are now AIRTP+ certified, applying their skills to secure AI systems worldwide.
About Your Instructor:
I’m Sander Schulhoff, the Founder of Learn Prompting. In October 2022, I published the 1st Prompt Engineering Guide on the Internet—two months before ChatGPT was released. Since then, my courses have trained over 3 million people, and I’m one of only two people (other than Andrew Ng) to partner with OpenAI on a ChatGPT course. I’ve also led Prompt Engineering workshops at OpenAI, Microsoft, Stanford, Deloitte, and Dropbox.
I’m an award-winning Generative AI researcher from the University of Maryland and the youngest recipient of the Best Paper Award at EMNLP 2023, the leading NLP Conference, selected out of 20,000 submitted research papers from PhDs around the world. I’ve co-authored research with OpenAI, Scale AI, Hugging Face, Stanford, the U.S. Federal Reserve, and Microsoft.
I also created HackAPrompt, the first and largest Generative AI Red Teaming competition. Most recently, I led a team from OpenAI, Microsoft, Google, and Stanford on The Prompt Report—the most comprehensive study on Prompt Engineering to date. This 76-page survey analyzed over 1,500 academic papers, evaluating the effectiveness of prompting techniques, AI agents, and Generative AI applications.
In addition to myself, you'll learn from top experts in Generative AI, Cybersecurity, and AI Red Teaming in our Pre-Recorded & Live Lectures, who will provide their unique perspective:
• Pliny the Prompter: The most renowned AI Jailbreaker, who has successfully jailbroken every major AI model—including OpenAI’s o1, which hasn’t even been made public! Pliny also jailbroke an AI agent to autonomously sign into Gmail, code ransomware, compress it into a zip file, write a phishing email, attach the payload, and successfully deliver it to a target
• Johann Rehberger: Led the creation of a Red Team in Microsoft Azure as a Principal Security Engineering Manager and built Uber’s Red Team. Johann discovered attack vectors like ASCII Smuggling and AI-powered C2 (Command and Control) attacks. He has also found Bug Bounties in OpenAI’s ChatGPT, Microsoft Copilot, GitHub Copilot Chat, Anthropic Claude, and Google Bard/Gemini. Johann will be sharing unreleased research that he hasn’t yet published on his blog, embracethered.com.
• Joseph Thacker: Principal AI Engineer at AppOmni, leading AI research on agentic functionality and retrieval systems. A security researcher specializing in application security and AI, Joseph has submitted over 1,000 vulnerabilities across HackerOne and Bugcrowd. He hacked into Google Bard at their LLM Bug Bounty event and took 1st place in the competition.
• Akshat Parikh: Former AI security researcher at a startup backed by OpenAI and DeepMind researchers. Ranked Top 21 in JP Morgan’s Bug Bounty Hall of Fame and Top 250 in Google’s Bug Bounty Hall of Fame—all by the age of 16.
• Richard Lundeen: Principal Software Engineering Lead for Microsoft’s AI Red Team and maintainer of Microsoft PyRit. He leads an interdisciplinary team of red teamers, ML researchers, and developers focused on securing AI systems.
• Sandy Dunn: A seasoned CISO with 20+ years of experience in healthcare. Project lead for the OWASP Top 10 Risks for LLM Applications Cybersecurity and Governance
*Limited-Time Offer: Enroll now and get complimentary access to Learn Prompting Plus and our AI Red Teaming Certification Exam (a $717 value). You'll get access to over 15 comprehensive courses—including this masterclass and additional courses in Prompt Engineering, Prompt Hacking, & AI/ML Red-Teaming, and a voucher for our AI Red Teaming Professional Certificate Exam (AIRTP+).
LIMITED SPOTS AVAILABLE
We're keeping this class intentionally small and will cap it at 100 participants so that we can provide more personal attention to each of you to make sure you get the most out of the course. If you're unable to place your order and see the waitlist page, that means we sold out this cohort. If so, please join our waitlist to get notified when we release our next cohort.
Money-Back Guarantee
We genuinely want this course to be transformative for you. You can receive a full refund within 14 days after the course ends, provided you meet the completion criteria in our refund policy. We're confident in the value we provide and stand by our promise to help you level up your AI security expertise.
Interested in an enterprise license so your whole team or company can take the course? Please reach out directly to team@learnprompting.org
01
Cybersecurity professionals seeking to master AI/ML red-teaming techniques and expand into AI security.
02
AI Engineers and Developers building AI systems who want to understand and mitigate AI-specific security risks.
03
AI safety and ethics specialists aiming to deepen their expertise in AI vulnerabilities and secure AI deployment.
04
Professionals transitioning into AI security roles, seeking practical skills and certifications in AI/ML red-teaming.
05
AI Product Managers and technical leads needing to understand AI security risks to build secure AI products.
06
CISOs and Security Executives aiming to incorporate AI security into their organizational strategies.
07
Government and Regulatory officials responsible for AI policy who want to understand AI security risks and safeguards.
Master Advanced AI Red-Teaming Techniques
Gain hands-on experience with prompt injections, jailbreaking, and prompt hacking in the HackAPrompt playground. Learn to identify and exploit AI vulnerabilities, enhancing your offensive security skills to a professional level.
Design and Execute Real-World Red-Teaming Projects
Apply your knowledge by designing and executing a red-teaming project to exploit vulnerabilities in a live chatbot or your own AI application. This practical experience prepares you for real-world AI security challenges.
Develop and implement effective defense mechanisms against prompt injections and other adversarial attacks to secure AI/ML systems.
Learn to implement robust defense strategies against prompt injections and adversarial attacks. Secure AI/ML systems by building resilient models and integrating security measures throughout the AI development lifecycle.
Analyze Real-World AI Security Breaches
Study real-world AI security breaches to evaluate risks and develop effective prevention strategies. Gain insights into common vulnerabilities and learn how to mitigate future threats.
Learn from Industry Leaders
Benefit from mentorship by Sander Schulhoff and guest lectures from top AI security experts like Akshat Parikh. Gain insider knowledge from professionals at the forefront of AI security.
Network with Like-Minded Professionals
Connect with cybersecurity professionals, AI safety specialists, developers, and executives. Expand your network, collaborate on projects, and join a community committed to securing AI technologies.
Earn an Industry-Recognized Certification
Upon completing the course and passing our AI Red Teaming Professional Certification exam, you'll become AIRTP+ Certified, which validates your expertise, enhances your professional credentials, and positions you as a leader in AI security.
Future-Proof Your Career in AI Security
Equip yourself with cutting-edge skills to stay ahead in the evolving tech landscape. Position yourself at the forefront of AI security, opening new career opportunities as AI transforms industries.
11 interactive live sessions
Lifetime access to course materials
32 in-depth lessons
Direct access to instructor
1 projects to apply learning
Guided feedback & reflection
Private community of peers
Course certificate upon completion
Maven Satisfaction Guarantee
This course is backed by Maven’s guarantee. You can receive a full refund within 14 days after the course ends, provided you meet the completion criteria in our refund policy.
AI Red Teaming and AI Security Masterclass
Mar
5
Mar
12
Mar
12
Mar
13
Mar
19
Mar
19
Mar
19
Mar
27
Apr
2
Apr
3
Apr
7
4.3 (32 ratings)
Andy Purdy
Logan Kilpatrick
Alex Blanton
CEO, Learn Prompting (3M+ Learners), HackAPrompt, & Award-winning AI Researcher
Sander Schulhoff is the Founder of Learn Prompting, the first prompt engineering guide released on the internet (even before ChatGPT launched), and an award-winning AI researcher from the University of Maryland who has authored research with OpenAI, Scale AI, Hugging Face, Stanford, US Federal Reserve, and Microsoft. He is also the co-instructor of "ChatGPT for Everyone," a course created in partnership with OpenAI.
He is the organizer of HackAPrompt, the largest AI Safety competition ever run, in partnership with OpenAI, ScaleAI, and Hugging Face. The competition attracted over 3,000 AI Hackers from around the world and collected 600,000 malicious prompts, making it the largest prompt injection dataset ever collected. It was also the largest competition ever held on the AICrowd platform, surpassing Amazon’s record for most competitors by 50%. His paper from this competition was awarded Best Theme Paper at EMNLP, the leading NLP conference, selected from over 20,000 papers submitted by PhD students and professors worldwide. OpenAI cited this paper in their Instruction Hierarchy, and used the dataset to make their models 30-50% safer from prompt injections (#1 security risk in LLMs).
In his recent research paper, "The Prompt Report," Sander Schulhoff led a team of researchers from OpenAI, Microsoft, Google, and Stanford University to conduct a comprehensive 76-page survey of over 1,500 prompting papers, analyzing the effectiveness of various prompting techniques, Agents, and Generative AI.
Schulhoff has spoken and led workshops at Microsoft, OpenAI, and Stanford University, and his Generative AI courses have trained over 3 million people to date, including thousands at Deloitte, Meta, Microsoft, and more.
Join an upcoming cohort
Cohort 2
$1,495
Dates
Payment Deadline
4-6 hours per week
Mondays - Live Class Sessions
1:00pm - 2:00pm EST
8 Modules covered over 6 Live sessions. Each session is exercise & participation heavy with plenty of time for Q&A with Sander (& Guest Speakers).
2-3 Modules Per Week (2 Hours)
You'll engage in hands-on activities and guided sessions covering essential topics. Course content is released weekly, giving you flexibility to complete modules at your own pace.
Weekly projects
2 hours per week
This course is hands-on! You’ll work on structured projects that apply red-teaming techniques to real-world scenarios and participate in guided sessions within the HackAPrompt Playground.
On-Demand Access to Learn Prompting Plus
20 hours+ of On-demand Course Content
Gain On-Demand Access to the AI/ML Red-Teaming Masterclass & Learn Prompting Plus, which includes over 20 hours of courses on ChatGPT (created in partnership with OpenAI), Prompt Engineering, Generative AI, AI Image-Creation, Prompt Hacking, & more.
Prompt Injections are the #1 Security Risk in LLMs… We created a list of the 28 different Prompt Injection techniques that you need to know!
We collected a dataset of over 600,000 prompt injections and developed a taxonomy of the 28 different Prompt Injection techniques that you must know to deploy secure AI models.
I want this list!
Active hands-on learning
This course builds on live workshops and hands-on projects
Interactive and project-based
You’ll be interacting with other learners through breakout rooms and project teams
Learn with a cohort of peers
Join a community of like-minded people who want to learn and grow alongside you
Sign up to be the first to know about course updates.
Join an upcoming cohort
Cohort 2
$1,495
Dates
Payment Deadline