AI Red Teaming and AI Security Masterclass

4.3 (33)

·

5 Weeks

·

Cohort-based Course

#1 AI Security Course. Learn AI Security from the Creator of HackAPrompt, the Largest AI Security competition ever held, backed by OpenAI

Led AI Security & Prompting workshops at

OpenAI
Microsoft
Stanford University
Dropbox
Deloitte

Course overview

Our AI Systems Are Vulnerable... Learn how to Secure Them!

In 2023, I partnered with OpenAI, ScaleAI, & Hugging Face to launch HackAPrompt—the 1st & Largest Generative AI Red Teaming Competition ever held. Over 3,300 AI hackers competed to bypass model guardrails using prompt injections—the #1 Security Risk in LLMs.


We collected the Largest Dataset of Prompt Injection attacks, which has been used by every major Frontier AI Lab, including OpenAI, who used it to improve their models' resistance to Prompt Injection Attacks by up to 46%.


Today, I've delivered workshops on AI Red Teaming & Prompting at OpenAI, Microsoft, Deloitte, & Stanford University. And because I love to teach... I created this course to teach you everything I know about AI Red Teaming!


About the Course:


This 4-week Masterclass is the #1 AI Security course for Cybersecurity Professionals, AI Trust & Safety leads, AI product managers, and engineers who want to master AI Red Teaming and secure AI systems against real-world threats.


You’ll gain hands-on experience identifying vulnerabilities in Generative AI systems, including prompt injections, jailbreaks, and adversarial attacks. Using the HackAPrompt playground, you’ll practice both attacking and defending AI systems—learning how to break them and how to secure them.


This course is practical, not just theoretical. You’ll work on real-world projects, analyzing live AI systems for vulnerabilities. These projects prepare you for the AI Red Teaming Certified Professional (AIRTP+) Exam, a 24+ hour assessment that validates your AI security expertise.


Our last cohort included 150 professionals from Microsoft, Google, Capital One, IBM, ServiceNow, and Walmart. Graduates passed the certification exam and are now AIRTP+ certified, applying their skills to secure AI systems worldwide.


About Your Instructor:


I’m Sander Schulhoff, the Founder of Learn Prompting & HackAPrompt. In October 2022, I published the 1st Prompt Engineering Guide on the Internet—two months before ChatGPT was released. Since then, my courses have trained over 3 million people, and I’m one of two people (other than Andrew Ng) to partner with OpenAI on a ChatGPT course. I’ve led Prompt Engineering workshops at OpenAI, Microsoft, Stanford, Deloitte, and Dropbox.


I’m an award-winning Generative AI researcher from the University of Maryland and the youngest recipient of the Best Paper Award at EMNLP 2023, the leading NLP Conference, selected out of 20,000 submitted research papers. My research paper on HackAPrompt, Ignore This Title and HackAPrompt, has been by cited by OpenAI in three major research papers: Instruction Hierarchy, Automated Red Teaming, and Adversarial Robustness papers.


I created HackAPrompt, the first and largest Generative AI Red Teaming competition. Most recently, I led a team from OpenAI, Microsoft, Google, and Stanford on The Prompt Report—the most comprehensive study on Prompt Engineering to date. This 76-page survey analyzed over 1,500 academic papers, evaluating the effectiveness of prompting techniques, AI agents, and Generative AI applications.


In addition to myself, you'll learn from top experts in Generative AI, Cybersecurity, and AI Red Teaming in our Pre-Recorded & Live Lectures, who will provide their unique perspective:


• Pliny the Prompter: The most renowned AI Jailbreaker, who has successfully jailbroken every major AI model—including OpenAI’s o1, which hasn’t even been made public! Pliny also jailbroke an AI agent to autonomously sign into Gmail, code ransomware, compress it into a zip file, write a phishing email, attach the payload, and successfully deliver it to a target

Jason Haddix: A bug bounty hunter with over 20-years of experience in cybersecurity as the CISO of Ubisoft, Head of Trust/Security/Operations at Bugcrowd, Director of Penetration Testing at HP, and Lead Penetration Tester at Redspin.

Richard Lundeen: Principal Software Engineering Lead for Microsoft’s AI Red Team and maintainer of Microsoft PyRit. He leads an interdisciplinary team of red teamers, ML researchers, and developers focused on securing AI systems.

Johann Rehberger: Led the creation of a Red Team in Microsoft Azure as a Principal Security Engineering Manager and built Uber’s Red Team. Johann discovered attack vectors like ASCII Smuggling and AI-powered C2 (Command and Control) attacks. He has also found Bug Bounties in OpenAI’s ChatGPT, Microsoft Copilot, GitHub Copilot Chat, Anthropic Claude, and Google Bard/Gemini. Johann will be sharing unreleased research that he hasn’t yet published on his blog, embracethered.com.

• Joseph Thacker: Principal AI Engineer at AppOmni, leading AI research on agentic functionality and retrieval systems. A security researcher specializing in application security and AI, Joseph has submitted over 1,000 vulnerabilities across HackerOne and Bugcrowd. He hacked into Google Bard at their LLM Bug Bounty event and took 1st place in the competition.

Sandy Dunn: A seasoned CISO with over 20 years of experience and the project lead for the OWASP Top 10 Risks for LLM Applications Cybersecurity and Governance. She has led innovative solutions in AI Security and Cybersecurity and is a sought-after speaker and advisor in the field. Sandy's expertise extends to Fractional CISO consulting, leadership roles within prominent cybersecurity organizations and is an Adjunct Cybersecurity Professor at Boise State University.

• Donato Capitella: A researcher with over 12 years of experience in offensive security and security assurance, has gained a following for his AI security work. Alongside his years of research and blogs on WithSecure, he has taught over 300k people about building and breaking AI systems on his YouTube channel (@donatocapitella).

• Valen Tagliabue: An AI researcher, data analyst, and prompt engineer specializing in NLP and cognitive science. His expertise includes LLM evaluation, safety, and alignment, with a strong focus on human-AI collaboration. He was part of the winning team in HackAPrompt2023, an AI safety competition backed by industry leaders like Hugging Face, Scale AI, and OpenAI.

• Akshat Parikh: Former AI security researcher at a startup backed by OpenAI and DeepMind researchers. Ranked Top 21 in JP Morgan’s Bug Bounty Hall of Fame and Top 250 in Google’s Bug Bounty Hall of Fame—all by the age of 16.


*Limited-Time Offer: Enroll now and get complimentary access to Learn Prompting Plus and our AI Red Teaming Certification Exam (a $717 value). You'll get access to over 15 comprehensive courses—including this masterclass and additional courses in Prompt Engineering, Prompt Hacking, & AI/ML Red-Teaming, and a voucher for our AI Red Teaming Professional Certificate Exam (AIRTP+). 


LIMITED SPOTS AVAILABLE

We're keeping this class intentionally small and will cap it at 100 participants so that we can provide more personal attention to each of you to make sure you get the most out of the course. If you're unable to place your order and see the waitlist page, that means we sold out this cohort. If so, please join our waitlist to get notified when we release our next cohort.


Money-Back Guarantee

We genuinely want this course to be transformative for you. You can receive a full refund within 14 days after the course ends, provided you meet the completion criteria in our refund policy. We're confident in the value we provide and stand by our promise to help you level up your AI security expertise.


Interested in an enterprise license so your whole team or company can take the course? Please reach out directly to team@learnprompting.org

Who is this course for:

01

Cybersecurity professionals seeking to master AI/ML red-teaming techniques and expand into AI security.

02

AI Engineers and Developers building AI systems who want to understand and mitigate AI-specific security risks.

03

AI safety and ethics specialists aiming to deepen their expertise in AI vulnerabilities and secure AI deployment.

04

Professionals transitioning into AI security roles, seeking practical skills and certifications in AI/ML red-teaming.

05

AI Product Managers and technical leads needing to understand AI security risks to build secure AI products.

06

CISOs and Security Executives aiming to incorporate AI security into their organizational strategies.

07

Government and Regulatory officials responsible for AI policy who want to understand AI security risks and safeguards.

What you’ll get out of this course

Master Advanced AI Red-Teaming Techniques

Gain hands-on experience with prompt injections, jailbreaking, and prompt hacking in the HackAPrompt playground. Learn to identify and exploit AI vulnerabilities, enhancing your offensive security skills to a professional level.

Design and Execute Real-World Red-Teaming Projects

Apply your knowledge by designing and executing a red-teaming project to exploit vulnerabilities in a live chatbot or your own AI application. This practical experience prepares you for real-world AI security challenges.

Develop and implement effective defense mechanisms against prompt injections and other adversarial attacks to secure AI/ML systems.

Learn to implement robust defense strategies against prompt injections and adversarial attacks. Secure AI/ML systems by building resilient models and integrating security measures throughout the AI development lifecycle.

Analyze Real-World AI Security Breaches

Study real-world AI security breaches to evaluate risks and develop effective prevention strategies. Gain insights into common vulnerabilities and learn how to mitigate future threats.

Learn from Industry Leaders

Benefit from mentorship by Sander Schulhoff and guest lectures from top AI security experts like Akshat Parikh. Gain insider knowledge from professionals at the forefront of AI security.

Network with Like-Minded Professionals

Connect with cybersecurity professionals, AI safety specialists, developers, and executives. Expand your network, collaborate on projects, and join a community committed to securing AI technologies.

Earn an Industry-Recognized Certification

Upon completing the course and passing our AI Red Teaming Professional Certification exam, you'll become AIRTP+ Certified, which validates your expertise, enhances your professional credentials, and positions you as a leader in AI security.

Future-Proof Your Career in AI Security

Equip yourself with cutting-edge skills to stay ahead in the evolving tech landscape. Position yourself at the forefront of AI security, opening new career opportunities as AI transforms industries.

This course includes

16 interactive live sessions

Lifetime access to course materials

31 in-depth lessons

Direct access to instructor

1 projects to apply learning

Guided feedback & reflection

Private community of peers

Course certificate upon completion

Maven Satisfaction Guarantee

This course is backed by Maven’s guarantee. You can receive a full refund within 14 days after the course ends, provided you meet the completion criteria in our refund policy.

Course syllabus

Week 1

Apr 17—Apr 20

    Apr

    17

    Live Session 1: Sander Schulhoff - Introduction to AI Red Teaming and Classical Security

    Thu 4/175:00 PM—6:00 PM (UTC)

    Apr

    17

    Live Prompt Hacking Session 1: Q&A and the First Project

    Thu 4/176:00 PM—7:00 PM (UTC)

    Introduction to Prompt Hacking

    2 items

    Introduction to GenAI Security and Harms

    2 items

    Project Kickoff: Hack HackAPrompt (Intro Track)

    1 item

    Resources and Recommended Reading

    1 item

Week 2

Apr 21—Apr 27

    Apr

    21

    Guest Speaker: Jason Haddix – Mastering AI Security: Real-World Scenarios and Cutting-Edge Methodologies

    Mon 4/215:00 PM—6:00 PM (UTC)

    Apr

    23

    Live Prompt Hacking Session 2: Intro Track Solutions

    Wed 4/236:00 PM—7:00 PM (UTC)

    Apr

    24

    Guest Speaker: Donato Capitella - Hacking LLM Applications: Tales and Techniques from the Industry

    Thu 4/245:00 PM—6:00 PM (UTC)

    Apr

    25

    Live Session 2: Sander Schulhoff - Ignore Your Instructions and HackAPrompt

    Fri 4/255:00 PM—6:00 PM (UTC)

    Module 3: Comprehensive Guide to Prompt Hacking Techniques and Attacks

    3 items

    Module 4: Defense Mechanisms

    4 items

    Project: Hack HackAPrompt

    1 item

Week 3

Apr 28—May 4

    Apr

    28

    Guest Speaker: Valen Tagliabue - Nudge, Trick, Break: Hacking AI by Thinking Like It

    Mon 4/285:00 PM—6:00 PM (UTC)

    Apr

    29

    Live Prompt Hacking Session 3: HackAPrompt 1.0 Solutions

    Tue 4/296:00 PM—7:00 PM (UTC)

    Apr

    30

    Guest Speaker: Joseph Thacker - AI App Attacks

    Wed 4/304:00 PM—5:00 PM (UTC)

    May

    1

    Live Event 3: Sander Schulhoff - Advanced Red-Teaming

    Thu 5/15:00 PM—6:00 PM (UTC)

    May

    2

    Guest Speaker: Sandy Dunn - Lessons from a CISO & OWASP Top 10 LLM Risks

    Fri 5/25:00 PM—6:00 PM (UTC)

    Guest Speaker: Pliny the Prompter - Jailbreaking Every AI Model

    0 items

    Module 5: Advanced Jailbreaking

    5 items

    Module 6: Advanced Prompt Injection

    2 items

    Project: Defeating HackAPrompt

    1 item

Week 4

May 5—May 11

    May

    5

    Live Prompt Hacking Session 4: PyRIT and Garak

    Mon 5/56:00 PM—7:00 PM (UTC)

    May

    6

    Guest Speaker: Richard Lundeen from Microsoft’s AI Red Team & Maintainer of PyRit

    Tue 5/65:00 PM—6:00 PM (UTC)

    May

    8

    Guest Speaker: Akshat Parikh - Adversarial Testing in the AI Era

    Thu 5/85:00 PM—6:00 PM (UTC)

    May

    9

    Live Event 4: Sander Schulhoff - The Future of Red-Teaming

    Fri 5/95:00 PM—6:00 PM (UTC)

    Guest Speaker: Johann Rehberger - SpAIware & More: Advanced Prompt Injection Exp

    1 item

    Module 7: Real-World Harms

    3 items

    Module 8: Physical harms

    3 items

    Project: Hack a Real-World-System

    1 item

Week 5

May 12—May 16

    AI/ML Red-Teaming Certification Exam

    1 item

    May

    16

    Thank You/Networking/Certification Celebration

    Fri 5/161:00 AM—1:30 AM (UTC)

Post-course

4.3 (33 ratings)

What students are saying

What people are saying

        Hands-on teaching and learning. Good intros and opportunity to work through assignments.
Andy Purdy

Andy Purdy

CISO of Huawei
        The folks at https://learnprompting.org do a great job!
Logan Kilpatrick

Logan Kilpatrick

Head of Developer Relations at OpenAI
        Thank you for today’s workshop! We had 1,696 attendees— This is a very high number for our internal community, second only to our keynote at last December’s big conference
Alex Blanton

Alex Blanton

AI/ML Community Lead (Office of CTO) at Microsoft

Meet your instructor

Sander Schulhoff

Sander Schulhoff

CEO, Learn Prompting (3M+ Learners), HackAPrompt, & Award-winning AI Researcher


Sander Schulhoff is the Founder of Learn Prompting, the first prompt engineering guide released on the internet (even before ChatGPT launched), and an award-winning AI researcher from the University of Maryland who has authored research with OpenAI, Scale AI, Hugging Face, Stanford, US Federal Reserve, and Microsoft. He is also the co-instructor of "ChatGPT for Everyone," a course created in partnership with OpenAI.


He is the organizer of HackAPrompt, the largest AI Safety competition ever run, in partnership with OpenAI, ScaleAI, and Hugging Face. The competition attracted over 3,000 AI Hackers from around the world and collected 600,000 malicious prompts, making it the largest prompt injection dataset ever collected. It was also the largest competition ever held on the AICrowd platform, surpassing Amazon’s record for most competitors by 50%. His paper from this competition was awarded Best Theme Paper at EMNLP, the leading NLP conference, selected from over 20,000 papers submitted by PhD students and professors worldwide. OpenAI cited this paper in their Instruction Hierarchy, and used the dataset to make their models 30-50% safer from prompt injections (#1 security risk in LLMs).


In his recent research paper, "The Prompt Report," Sander Schulhoff led a team of researchers from OpenAI, Microsoft, Google, and Stanford University to conduct a comprehensive 76-page survey of over 1,500 prompting papers, analyzing the effectiveness of various prompting techniques, Agents, and Generative AI.


Schulhoff has spoken and led workshops at Microsoft, OpenAI, and Stanford University, and his Generative AI courses have trained over 3 million people to date, including thousands at Deloitte, Meta, Microsoft, and more.

A pattern of wavy dots

Join an upcoming cohort

AI Red Teaming and AI Security Masterclass

Cohort 3

$1,495

Dates

Apr 18—May 16, 2025

Payment Deadline

Apr 18, 2025
Get reimbursed

Course schedule

4-6 hours per week

  • Mondays - Live Class Sessions

    1:00pm - 2:00pm EST

    8 Modules covered over 4 Live sessions. Each session is exercise & participation heavy with plenty of time for Q&A with Sander (& Guest Speakers).

  • 2-3 Modules Per Week (2 Hours)

    You'll engage in hands-on activities and guided sessions covering essential topics. Course content is released weekly, giving you flexibility to complete modules at your own pace.

  • Weekly projects

    2 hours per week

    This course is hands-on! You’ll work on structured projects that apply red-teaming techniques to real-world scenarios and participate in guided sessions within the HackAPrompt Playground.

  • On-Demand Access to Learn Prompting Plus

    20 hours+ of On-demand Course Content

    Gain On-Demand Access to the AI/ML Red-Teaming Masterclass & Learn Prompting Plus, which includes over 20 hours of courses on ChatGPT (created in partnership with OpenAI), Prompt Engineering, Generative AI, AI Image-Creation, Prompt Hacking, & more.

Free resource

Prompt Injections are the #1 Security Risk in LLMs… We created a list of the 28 different Prompt Injection techniques that you need to know!

We collected a dataset of over 600,000 prompt injections and developed a taxonomy of the 28 different Prompt Injection techniques that you must know to deploy secure AI models.

I want this list!

Learning is better with cohorts

Learning is better with cohorts

Active hands-on learning

This course builds on live workshops and hands-on projects

Interactive and project-based

You’ll be interacting with other learners through breakout rooms and project teams

Learn with a cohort of peers

Join a community of like-minded people who want to learn and grow alongside you

Frequently Asked Questions

Stay in the loop

Sign up to be the first to know about course updates.

A pattern of wavy dots

Join an upcoming cohort

AI Red Teaming and AI Security Masterclass

Cohort 3

$1,495

Dates

Apr 18—May 16, 2025

Payment Deadline

Apr 18, 2025
Get reimbursed

$1,495

4.3 (33)

·

5 Weeks