6 Weeks
·Cohort-based Course
#1 AI Safety Course. Learn AI Security from creator of HackAPrompt, the Largest AI Safety competition ever run (backed by OpenAI & ScaleAI)
This course is popular
17 people enrolled last week.
6 Weeks
·Cohort-based Course
#1 AI Safety Course. Learn AI Security from creator of HackAPrompt, the Largest AI Safety competition ever run (backed by OpenAI & ScaleAI)
This course is popular
17 people enrolled last week.
Taught Prompt Engineering workshops at
Course overview
You’ll learn:
• To identify and exploit vulnerabilities in Generative AI systems like prompt injections, jailbreaks, and adversarial attacks.
• Robust defense mechanisms to secure AI/ML systems.
• Hands-on skills through real-world projects in the HackAPrompt Playground.
About the Instructor:
Sander Schulhoff
CEO, Learn Prompting | HackAPrompt Organizer | Award-winning AI Researcher
Sander is the Founder of Learn Prompting, the first prompt engineering guide released on the internet (even before ChatGPT launched), and has led Prompt Engineering workshops at OpenAI & Microsoft. He's an award-winning Generative AI researcher from the University of Maryland, and has authored research with OpenAI, Scale AI, Hugging Face, Stanford, The US Federal Reserve, and Microsoft. He is also the co-instructor of "ChatGPT for Everyone," a course created in partnership with OpenAI.
He organized HackAPrompt, the largest Generative AI Red Teaming competition ever, in partnership with OpenAI, ScaleAI, & Hugging Face. Over 3,000 GenAI Hackers competed, and he collected 600,000 malicious prompts, making it the largest prompt injection dataset ever collected. His post-competition paper was awarded Best Theme Paper at EMNLP 2023, the leading NLP conference, out of 20,000 submitted papers, competing against researchers from around the world.
This paper was cited by OpenAI in their Instruction Hierarchy paper, who used the dataset to make their models 30-46% more resistant to prompt injections (#1 security risk in LLMs).
Sander also recently led a team of researchers from OpenAI, Microsoft, Google, and Stanford University on The Prompt Report, the most comprehensive paper on prompting. This 76-page survey analyzed over 1,500 prompting papers, assessing the effectiveness of various prompting techniques, AI Agents, and Generative AI.
Sander has spoken and led workshops at Microsoft (one of their highest-attended internal workshops in 2024), OpenAI, and Stanford University. His Generative AI courses have trained over 3 million people, including thousands at Deloitte, Meta, Microsoft, and more.
About the Course:
This 6-week masterclass is the #1 AI Safety Course designed specifically for Cybersecurity Professionals, AI Safety Specialists, AI Product Managers, and GenAI Developers aiming to master AI Red-Teaming. Learn directly from Sander Schulhoff, the creator of HackAPrompt, alongside industry leaders shaping the future of AI Safety.
You'll learn from industry leaders about the vulnerabilities of Generative AI systems, including prompt injections, jailbreaks, and other adversarial attacks. We’ll be doing hands-on exercises in the HackAPrompt playground, so you can practice attacking (and defending) AI models in a controlled environment. The course covers everything from understanding Generative AI threat landscapes to building strong defense mechanisms and ensuring compliance with security standards.
You’ll also work on a capstone project to expose vulnerabilities in a live chatbot or your own AI application, putting your skills to the test. Get direct mentorship from me, Sander Schulhoff, along with guest lectures from top experts in Generative AI security, and connect with others in the AI/ML red-teaming community along the way.
In addition to Sander, this course will feature guest speakers in Generative AI and cybersecurity who will share real-world opportunities to apply your new AI/ML Red-Teaming skills:
• Pliny the Prompter: The most renowned AI Jailbreaker, who has successfully jailbroken every AI model released to date—including OpenAI’s o1, which hasn’t even been made public!
• Akshat Parikh: Ex-AI security researcher at a startup backed by OpenAI and DeepMind researchers, Top 21 in JP Morgan’s Bug Bounty Hall of Fame, and Top 250 in Google’s Bug Bounty Hall of Fame... at 17 years old!
• Joseph Thacker: Principal AI Engineer at AppOmni, security researcher specializing in application security and AI, with over 1,000 vulnerabilities submitted across HackerOne and Bugcrowd.
• More Guest Speakers To Be Announced: Stay tuned for announcements about additional industry leaders."
Plus free access to Learn Prompting Plus (a $549 value): Gain immediate access to over 15 comprehensive courses—including this masterclass and additional courses in Prompt Engineering, Prompt Hacking, & AI/ML Red-Teaming (valued at $299), and a voucher for the Learn Prompting AI/ML Red-Teaming Certificate Exam (valued at $249).
Exclusive Benefit: Upon completing our course and passing the AI/ML Red-Teaming Certification exam, you'll be added to a special job board on our website, giving you access to exclusive red-teaming and AI security job opportunities.
LIMITED SPOTS AVAILABLE
We're keeping this class intentionally small and will cap it at 100 participants so that we can provide more personal attention to each of you to make sure you get the most out of the course. If you're unable to place your order and see the waitlist page, that means we sold out this cohort. If so, please join our waitlist to get notified when we release our next cohort.
Money-Back Guarantee
We genuinely want this course to be transformative for you. You can receive a full refund within 14 days after the course ends, provided you meet the completion criteria in our refund policy. We're confident in the value we provide and stand by our promise to help you level up your AI security expertise.
Interested in an enterprise license so your whole team or company can take the course? Please reach out directly to team@learnprompting.org
01
Cybersecurity professionals seeking to master AI/ML red-teaming techniques and expand into AI security.
02
Developers and engineers building AI systems who want to understand and mitigate AI-specific security risks.
03
AI safety and ethics specialists aiming to deepen their expertise in AI vulnerabilities and secure AI deployment.
04
Professionals transitioning into AI security roles, seeking practical skills and certifications in AI/ML red-teaming.
05
AI Product Managers and technical leads needing to understand AI security risks to build secure AI products.
06
CISOs and Security Executives aiming to incorporate AI security into their organizational strategies.
07
Government and Regulatory officials responsible for AI policy who want to understand AI security risks and safeguards.
Master Advanced AI Red-Teaming Techniques
Gain hands-on experience with prompt injections, jailbreaking, and prompt hacking in the HackAPrompt playground. Learn to identify and exploit AI vulnerabilities, enhancing your offensive security skills to a professional level.
Design and Execute Real-World Red-Teaming Projects
Apply your knowledge by designing and executing a red-teaming project to exploit vulnerabilities in a live chatbot or your own AI application. This practical experience prepares you for real-world AI security challenges.
Develop and implement effective defense mechanisms against prompt injections and other adversarial attacks to secure AI/ML systems.
Learn to implement robust defense strategies against prompt injections and adversarial attacks. Secure AI/ML systems by building resilient models and integrating security measures throughout the AI development lifecycle.
Analyze Real-World AI Security Breaches
Study real-world AI security breaches to evaluate risks and develop effective prevention strategies. Gain insights into common vulnerabilities and learn how to mitigate future threats.
Learn from Industry Leaders
Benefit from mentorship by Sander Schulhoff and guest lectures from top AI security experts like Akshat Parikh. Gain insider knowledge from professionals at the forefront of AI security.
Network with Like-Minded Professionals
Connect with cybersecurity professionals, AI safety specialists, developers, and executives. Expand your network, collaborate on projects, and join a community committed to securing AI technologies.
Earn an Industry-Recognized Certification
Upon completing the course and passing the exam, receive a prestigious Certificate in AI/ML Red-Teaming. This certification validates your expertise, enhances your professional credentials, and positions you as a leader in AI security.
Future-Proof Your Career in AI Security
Equip yourself with cutting-edge skills to stay ahead in the evolving tech landscape. Position yourself at the forefront of AI security, opening new career opportunities as AI transforms industries.
6 interactive live sessions
Lifetime access to course materials
30 in-depth lessons
Direct access to instructor
Projects to apply learnings
Guided feedback & reflection
Private community of peers
Course certificate upon completion
Maven Satisfaction Guarantee
This course is backed by Maven’s guarantee. You can receive a full refund within 14 days after the course ends, provided you meet the completion criteria in our refund policy.
AI Red-Teaming and AI Safety: Masterclass
Dec
5
Dec
12
Dec
19
Jan
6
Jan
10
Jan
12
Andy Purdy
Logan Kilpatrick
Alex Blanton
CEO, Learn Prompting (3M+ Learners), HackAPrompt, & Award-winning AI Researcher
Sander Schulhoff is the Founder of Learn Prompting, the first prompt engineering guide released on the internet (even before ChatGPT launched), and an award-winning AI researcher from the University of Maryland who has authored research with OpenAI, Scale AI, Hugging Face, Stanford, US Federal Reserve, and Microsoft. He is also the co-instructor of "ChatGPT for Everyone," a course created in partnership with OpenAI.
He is the organizer of HackAPrompt, the largest AI Safety competition ever run, in partnership with OpenAI, ScaleAI, and Hugging Face. The competition attracted over 3,000 AI Hackers from around the world and collected 600,000 malicious prompts, making it the largest prompt injection dataset ever collected. It was also the largest competition ever held on the AICrowd platform, surpassing Amazon’s record for most competitors by 50%. His paper from this competition was awarded Best Theme Paper at EMNLP, the leading NLP conference, selected from over 20,000 papers submitted by PhD students and professors worldwide. OpenAI cited this paper in their Instruction Hierarchy, and used the dataset to make their models 30-50% safer from prompt injections (#1 security risk in LLMs).
In his recent research paper, "The Prompt Report," Sander Schulhoff led a team of researchers from OpenAI, Microsoft, Google, and Stanford University to conduct a comprehensive 76-page survey of over 1,500 prompting papers, analyzing the effectiveness of various prompting techniques, Agents, and Generative AI.
Schulhoff has spoken and led workshops at Microsoft, OpenAI, and Stanford University, and his Generative AI courses have trained over 3 million people to date, including thousands at Deloitte, Meta, Microsoft, and more.
Join an upcoming cohort
Cohort 1
$1,200
Dates
Payment Deadline
4-6 hours per week
Mondays - Live Class Sessions
1:00pm - 2:00pm EST
8 Modules covered over 6 Live sessions. Each session is exercise & participation heavy with plenty of time for Q&A with Sander (& Guest Speakers).
2-3 Modules Per Week (2 Hours)
You'll engage in hands-on activities and guided sessions covering essential topics. Course content is released weekly, giving you flexibility to complete modules at your own pace.
Weekly projects
2 hours per week
This course is hands-on! You’ll work on structured projects that apply red-teaming techniques to real-world scenarios and participate in guided sessions within the HackAPrompt Playground.
On-Demand Access to Learn Prompting Plus
20 hours+ of On-demand Course Content
Gain On-Demand Access to the AI/ML Red-Teaming Masterclass & Learn Prompting Plus, which includes over 20 hours of courses on ChatGPT (created in partnership with OpenAI), Prompt Engineering, Generative AI, AI Image-Creation, Prompt Hacking, & more.
Prompt Injections are the #1 Security Risk in LLMs… We created a list of the 28 different Prompt Injection techniques that you need to know!
We collected a dataset of over 600,000 prompt injections and developed a taxonomy of the 28 different Prompt Injection techniques that you must know to deploy secure AI models.
I want this list!
Active hands-on learning
This course builds on live workshops and hands-on projects
Interactive and project-based
You’ll be interacting with other learners through breakout rooms and project teams
Learn with a cohort of peers
Join a community of like-minded people who want to learn and grow alongside you
Sign up to be the first to know about course updates.
Join an upcoming cohort
Cohort 1
$1,200
Dates
Payment Deadline