lenny_fire
Featured in
Lenny’s List

AI Red Teaming and AI Security Masterclass

4.4 (66)

·

4 Weeks

·

Cohort-based Course

#1 AI Security Course. Brought to you by the team behind Learn Prompting & HackAPrompt, backed by OpenAI.

This course is popular

9 people enrolled last week.

Led AI Prompting & Security workshops at

OpenAI
Microsoft
Stanford University
Dropbox
Deloitte

Course overview

Your AI Systems Are Vulnerable... Learn how to Secure Them!

Prompt injection attacks are the #1 security vulnerability in enterprise AI systems. Attackers can manipulate your AI chatbots to steal sensitive data, access internal documents, and bypass your safety guardrails. Most organizations using AI don't even know they're at risk.


In 2023, I partnered with OpenAI to run HackAPrompt, the 1st Generative AI Red Teaming Competition ever, to study how the AI models can be hijacked. My research has since been used by teams at OpenAI, Google DeepMind, Meta, and Anthropic to improve their model's resistance to prompt injections. Most notably, OpenAI used it to make their model's up to 46% more secure.


Now, I advise Governments & Enterprises how to secure their AI Systems, including teams at OpenAI, Microsoft, Harvard University, Deloitte, Stanford University, and more. And because I love to teach... I created this Masterclass to teach you everything I know about AI Red Teaming!


🛡️ About This Course:

This On-Demand Masterclass is the #1 AI Security course for Cybersecurity Professionals, AI Trust & Safety leads, AI Product Managers, and AI Engineers who want to secure their AI systems against real-world threats.


You'll gain hands-on experience identifying vulnerabilities in Generative AI systems, including prompt injections, jailbreaks, and adversarial attacks. Using the HackAPrompt playground, you'll practice both attacking and defending AI systems, learning how to break them and how to secure them.


This course is practical, not just theoretical. You'll work on real-world projects, analyzing live AI systems for vulnerabilities. These projects prepare you for the AI Red Teaming Certified Professional (AIRTP+) Exam, a 24+ hour assessment that validates your AI security expertise. We've trained 1000's of professionals from Microsoft, Google, Capital One, IBM, ServiceNow, and Walmart. Graduates passed the certification exam and are now AIRTP+ certified, applying their skills to secure AI systems worldwide.


👨‍🏫 About Your Instructor:

I'm Sander Schulhoff. I'm an award-winning AI researcher and the youngest-ever recipient of the Best Paper Award at EMNLP 2023, selected from over 20,000 submissions. My research on AI security has been cited by OpenAI, Anthropic, DeepMind, Microsoft, Google, and NIST to improve their models.


I published the internet's 1st Prompt Engineering Guide two months before ChatGPT launched. Since then, I've partnered with OpenAI on a ChatGPT course and taught over 3 million people how to use Generative AI. Most recently, I led researchers from OpenAI, Microsoft, Google, and Stanford on The Prompt Report, the most comprehensive study on Prompt Engineering to date.


🌟 Learn from the World's Top AI Security Experts:

Pliny the Prompter - The World's Most Famous AI Jailbreaker

The most renowned AI Jailbreaker, who has successfully jailbroken every major AI model—including OpenAI's o1, which hasn't even been made public!

Joseph Thacker - Solo Founder & Top Bug Bounty Hunter

As a solo founder and security researcher, Joseph has submitted over 1,000 vulnerabilities across HackerOne and Bugcrowd.

Jason Haddix - Former CISO of Ubisoft

Bug bounty hunter with over 20-years of experience in cybersecurity as the CISO of Ubisoft and Director of Penetration Testing at HP.

Valen Tagliabue - Winner of HackAPrompt 1.0

An AI researcher specializing in NLP and cognitive science. Part of the winning team in HackAPrompt 1.0.

David Williams-King - AI Researcher under Turing Prize Recipient

Research scientist at MILA, researching under Turing Prize winning Yoshua Bengio on his Safe AI For Humanity team.

Leonard Tang - Founder and CEO of Haize Labs

NYC-based AI safety startup providing cutting-edge evaluation tools to leading companies like OpenAI and Anthropic.

Richard Lundeen - Microsoft's AI Red Team Lead

Principal Software Engineering Lead for Microsoft's AI Red Team and maintainer of Microsoft PyRit.

Johann Rehberger - Founded a Red Team at Microsoft

Built Uber's Red Team and discovered attack vectors like ASCII Smuggling. Found Bug Bounties in every major Gen AI model.

Sandy Dunn - CISO with 20+ years of Experience in Healthcare

A seasoned CISO with 20+ years of experience in healthcare. Project lead for the OWASP Top 10 Risks for LLM Applications Cybersecurity and Governance.

Donato Capitella - AI Security Blogger and Youtuber

With over 12 years of experience in offensive security and security assurance, Donato has gained a following for his AI security work. Alongside his blogs on WithSecure, he has taught over 300k people about building and breaking AI systems on his YouTube channel (@donatocapitella).

Akshat Parikh - Leading Bug Bounty Hunter, AI Security Startup Founder

Former AI security researcher at a startup backed by OpenAI and DeepMind researchers. Ranked Top 21 in JP Morgan's Bug Bounty Hall of Fame and Top 250 in Google's Bug Bounty Hall of Fame—all by the age of 16.

Who is this course for:

01

CISOs or other executives aiming to incorporate high-level AI security into their business insights and organizational strategies.

02

Business or tech professionals transitioning into AI security seeking practical experience, supplementary training, and certifications

03

Project managers or product owners who want to be more effective at unblocking, supporting, and leading AI-related projects

04

Cybersecurity professionals seeking to master AI/ML red-teaming techniques and professional development opportunities

05

Government and regulatory officers responsible for AI policy who want to understand AI vulnerabilities, security, safety, and risks

What you’ll get out of this course

Analyze Real-World AI Security Breaches

Study real-world AI security breaches to evaluate risks and develop effective prevention strategies. Gain insights into common vulnerabilities and learn how to mitigate future threats.

Master Advanced AI Red-Teaming Techniques

Gain hands-on experience with prompt injections, jailbreaking, and prompt hacking in the HackAPrompt playground. Learn to identify and exploit AI vulnerabilities, enhancing your offensive security skills to a professional level.

Implement effective defense mechanisms against prompt injections and other adversarial attacks to secure AI/ML systems.

Learn to implement robust defense strategies against prompt injections and adversarial attacks. Secure AI/ML systems by building resilient models and integrating security measures throughout the AI development lifecycle.

Earn an Industry-Recognized Certification

Upon completing the course and passing our AI Red Teaming Professional Certification exam, you'll become AIRTP+ Certified, which validates your expertise, enhances your professional credentials, and positions you as a leader in AI security.

What’s included

Sander Schulhoff

Live sessions

Learn directly from Sander Schulhoff in a real-time, interactive format.

Learn Prompting+ Access

Complimentary access to over 15 comprehensive courses on Prompt Engineering, Prompt Hacking, and other related topics.

AIRT Certification

Receive a voucher to take the AI Red Teaming Professional Certificate Exam for free!

Lifetime access

Go back to course content and recordings whenever you need to.

Community of peers

Stay accountable and share insights with like-minded professionals.

Certificate of completion

Share your new skills with your employer or on LinkedIn.

Maven Guarantee

This course is backed by the Maven Guarantee. Students are eligible for a full refund up until the halfway point of the course.

Course syllabus

21 live sessions • 43 lessons • 5 projects

Week 1

Jul 7—Jul 13

    Jul

    7

    Live Session 1: Sander Schulhoff - Introduction to AI Red Teaming and Classical Security

    Mon 7/75:00 PM—6:00 PM (UTC)

    Jul

    7

    Office Hours

    Mon 7/76:00 PM—7:00 PM (UTC)

    Introduction to AI Systems

    5 items

    Introduction to AI Vulnerabilities and Exploits

    4 items

    Traditional Cybersecurity vs. AI Security

    4 items

    Jul

    8

    Live Prompt Hacking & Project Review

    Tue 7/85:00 PM—5:30 PM (UTC)

    Jul

    9

    Guest Speaker: Sandy Dunn - Lessons from a CISO & OWASP Top 10 LLM Risks

    Wed 7/95:00 PM—6:00 PM (UTC)

    Jul

    10

    Guest Speaker: Joseph Thacker - AI App Attacks

    Thu 7/105:00 PM—6:00 PM (UTC)

    Guest Speaker: Pliny the Prompter - Jailbreaking Every AI Model

    1 item

    👾 Project Kickoff: Hack HackAPrompt (Intro Track)

    1 item

    💬 Week 1: Retrospective

    1 item

    📚 Supplementary Resources

    2 items

Week 2

Jul 14—Jul 20

    Jul

    14

    Live Session 2: Sander Schulhoff - Ignore Your Instructions and HackAPrompt

    Mon 7/145:00 PM—6:00 PM (UTC)

    Jul

    14

    Office Hours

    Mon 7/146:00 PM—7:00 PM (UTC)

    Jul

    17

    Guest Speaker: Valen Tagliabue - Nudge, Trick, Break: Hacking AI by Thinking Like It

    Thu 7/175:00 PM—6:00 PM (UTC)

    Jul

    15

    Live Prompt Hacking & Project Review

    Tue 7/155:00 PM—6:00 PM (UTC)

    Guest Speaker: Donato Capitella - Hacking LLM Applications: Tales and Techniques

    1 item

    Comprehensive Guide to Prompt Hacking Techniques

    2 items

    Harms in AI-Red Teaming

    4 items

    👾 Project: Hack HackAPrompt 1.0

    1 item

    📚 Resources and Recommended Reading

    1 item

    💬 Week 2: Retrospective

    1 item

Week 3

Jul 21—Jul 27

    Jul

    21

    Live Session 3: Sander Schulhoff - Advanced Red-Teaming

    Mon 7/215:00 PM—6:00 PM (UTC)

    Jul

    21

    Office Hours

    Mon 7/216:00 PM—7:00 PM (UTC)

    Jul

    22

    Live Prompt Hacking & Project Review

    Tue 7/225:00 PM—6:00 PM (UTC)

    Jul

    23

    Guest Speaker: Leonard Tang, CEO of Haize Labs - Frontiers of Red-Teaming

    Wed 7/235:00 PM—6:00 PM (UTC)

    Jul

    24

    Guest Speaker: David Williams-King - How AI Impacts Traditional Cybersecurity

    Thu 7/245:00 PM—6:00 PM (UTC)

    AI Defense

    5 items

    State of the Industry

    4 items

    👾 Project: Healthcare Portal

    1 item

    📚 Resources and Recommended Reading

    1 item

    💬 Week 3: Retrospective

    1 item

Week 4

Jul 28—Aug 2

    Jul

    28

    Live Session 4: Sander Schulhoff - The Future of Red-Teaming

    Mon 7/285:00 PM—6:00 PM (UTC)

    Jul

    28

    Office Hours

    Mon 7/286:00 PM—7:00 PM (UTC)

    Jul

    29

    Guest Speaker: Jason Haddix – Mastering AI Security: Real-World Scenarios and Cutting-Edge Methodologies

    Tue 7/295:00 PM—6:00 PM (UTC)

    Jul

    29

    Live Prompt Hacking & Project Review

    Tue 7/296:00 PM—7:00 PM (UTC)

    Jul

    30

    Guest Speaker: Nina and Bashir (Richard Lundeen's team) from PyRIT

    Wed 7/305:00 PM—6:00 PM (UTC)

    Guest Speaker: Johann Rehberger - SpAIware & More: Advanced Prompt Injection Exp

    1 item

    Guest Speaker: Akshat Parikh - Adversarial Testing in the AI Era

    1 item

    Automated AI Red-Teaming

    2 items

    📚 Resources and Recommended Reading

    1 item

    💬 Week 4: Retrospective

    1 item

Post-course

    Aug

    4

    Final Review Session with Sander

    Mon 8/45:00 PM—6:00 PM (UTC)

    Aug

    8

    Thank You/Networking/Certification Celebration

    Fri 8/85:00 PM—5:30 PM (UTC)

    AI/ML Red-Teaming Certification Exam

    1 item

    Certificate of Completion

    1 item

4.4 (66 ratings)

What students are saying

What AI leaders are saying

        Hands-on teaching and learning. Good intros and opportunity to work through assignments.
Andy Purdy

Andy Purdy

CISO of Huawei
        The folks at https://learnprompting.org do a great job!
Logan Kilpatrick

Logan Kilpatrick

Head of Developer Relations at OpenAI
        Thank you for today’s workshop! We had 1,696 attendees— This is a very high number for our internal community, second only to our keynote at last December’s big conference
Alex Blanton

Alex Blanton

AI/ML Community Lead (Office of CTO) at Microsoft
How to Secure Your AI System

How to Secure Your AI System

Prompt Hacking Techniques

Learn about direct and indirect techniques for manipulating AI chatbots into performing unintended actions.

Defenses for AI Systems

Explore strategies for preventing harmful attacks from malicious actors.

Hands-on Demos

Apply prompt injection techniques from this lesson to deceive chatbots using HackAPrompt.com

Get the free recording

A pattern of wavy dots

Join an upcoming cohort

AI Red Teaming and AI Security Masterclass

Cohort 4

$1,495

Dates

July 7—Aug 2, 2025

Payment Deadline

July 7, 2025

Don't miss out! Enrollment closes in 3 days

Cohort 5

$1,495

Dates

Aug 25—Sep 20, 2025

Payment Deadline

Aug 25, 2025

Cohort 6

$1,495

Dates

Oct 6—Nov 1, 2025

Payment Deadline

Oct 6, 2025
Get reimbursed

Course Schedule

4-6 hours per week

  • Mondays - Live Class Sessions

    1:00pm - 2:00pm EST

    8 Modules covered over 4 Live sessions. Each session is exercise & participation heavy with plenty of time for Q&A with Sander (& Guest Speakers).

  • Live Prompt Hacking Sessions

    You'll engage in live prompt hacking sessions that will go over weekly projects by hacking state-of-the-art AIs. These sessions will also introduce students to the latest automatic red-teaming tools like PyRIT and Garak.

  • Weekly projects

    2 hours per week

    This course is hands-on! You’ll work on structured projects that apply red-teaming techniques to real-world scenarios and participate in guided sessions within the HackAPrompt Playground.

  • On-Demand Access to Learn Prompting Plus

    20 hours+ of On-demand Course Content

    Gain On-Demand Access to the AI/ML Red-Teaming Masterclass & Learn Prompting Plus, which includes over 20 hours of courses on ChatGPT (created in partnership with OpenAI), Prompt Engineering, Generative AI, AI Image-Creation, Prompt Hacking, & more.

Meet your instructor

Sander Schulhoff

Sander Schulhoff

CEO, Learn Prompting (3M+ Learners), HackAPrompt, & Award-winning AI Researcher


Sander Schulhoff is the Founder of Learn Prompting, the first prompt engineering guide released on the internet (even before ChatGPT launched), and an award-winning AI researcher from the University of Maryland who has authored research with OpenAI, Scale AI, Hugging Face, Stanford, US Federal Reserve, and Microsoft. He is also the co-instructor of "ChatGPT for Everyone," a course created in partnership with OpenAI.


He is the organizer of HackAPrompt, the largest AI Safety competition ever run, in partnership with OpenAI, ScaleAI, and Hugging Face. The competition attracted over 3,000 AI Hackers from around the world and collected 600,000 malicious prompts, making it the largest prompt injection dataset ever collected. It was also the largest competition ever held on the AICrowd platform, surpassing Amazon’s record for most competitors by 50%. His paper from this competition was awarded Best Theme Paper at EMNLP, the leading NLP conference, selected from over 20,000 papers submitted by PhD students and professors worldwide. OpenAI cited this paper in their Instruction Hierarchy, and used the dataset to make their models 30-50% safer from prompt injections (#1 security risk in LLMs).


In his recent research paper, "The Prompt Report," Sander Schulhoff led a team of researchers from OpenAI, Microsoft, Google, and Stanford University to conduct a comprehensive 76-page survey of over 1,500 prompting papers, analyzing the effectiveness of various prompting techniques, Agents, and Generative AI.


Schulhoff has spoken and led workshops at Microsoft, OpenAI, and Stanford University, and his Generative AI courses have trained over 3 million people to date, including thousands at Deloitte, Meta, Microsoft, and more.

Frequently Asked Questions

Still reading?

Enter your email to get periodic updates about this course

$1,495

4.4 (66)

·

3 days left to enroll