AI Governance & Cybersecurity Researcher
Ran the 1st AI Red Teaming CTF w/ OpenAI

.png&w=768&q=75)
4 people enrolled last week.
The #1 AI Red Teaming Course. Taught by HackAPrompt, creators of the 1st AI Red Teaming Competition
Prompt injection attacks are the #1 security vulnerability in AI systems. HackAPrompt's research helped OpenAI increase their model's resistance to prompt injections by up to 46%. Our recent research with OpenAI, Anthropic, and Google DeepMind found humans outperform automated AI Red Teaming.
That's why companies need trained AI Red Teamers, and why we built this course.
Using HackAPrompt, you'll gain hands-on experience identifying prompt injections, jailbreaks, and adversarial attacks – learning to break AI systems and secure them.
Plus, access recorded hacking sessions with top AI Red Teamers who share their favorite techniques, including:
Pliny the Prompter – World's most renowned AI jailbreaker
Jason Haddix – ex-CISO Ubisoft
Johann Rehberger – Built Microsoft Azure Red Teams
Richard Lundeen – Microsoft AI Red Team Lead
Valen – 1st in Anthropic's Red Teaming competition
& more!
The training prepares you for our AIRTP+ certification exam, which has certified 100's from Microsoft, Google, Capital One, IBM, ServiceNow, and Walmart.
For enterprise training: team@learnprompting.org
Learn how to uncover AI vulnerabilities, run real attacks, and apply defenses that secure systems in production.
Learn how prompt injections, jailbreaks, and adversarial inputs actually succeed
Study real model behavior to identify weak points in prompts, context, and integrations
Recognize where traditional security assumptions break down in AI applications
Run hands-on attacks in a controlled environment to expose real vulnerabilities
Trace how manipulations happen by analyzing system outputs and behavior
Use red teaming workflows to validate risks and stress-test AI features
Apply validation, filtering, and safety controls that strengthen system behavior
Test defenses under realistic conditions to confirm they prevent exploitation
Build repeatable evaluation routines that surface failures early
Investigate live systems for vulnerabilities and test exploitation paths
Repair insecure flows and measure the impact of your fixes
Complete projects that prepare you for the AIRTP+ exam through practical, exam-aligned work
Security, trust, and safety professionals who need practical ways to evaluate AI risks, run red team tests, and improve system reliability.
PMs and product leads who want to understand AI vulnerabilities and make informed decisions about safety, risk, and system behavior.
Engineers and technical ICs who want practical skills to test AI behavior, uncover weaknesses, and build more secure AI features.
Live sessions
Learn directly from Krystal Jackson & Sander Schulhoff in a real-time, interactive format.
Learn Prompting+ Access
Complimentary access to over 15 comprehensive courses on Prompt Engineering, Prompt Hacking, and other related topics.
AIRTP+ Certification
Receive a voucher to take the AI Red Teaming Professional Certificate Exam for free!
Lifetime access
Go back to course content and recordings whenever you need to.
Community of peers
Stay accountable and share insights with like-minded professionals.
Certificate of completion
Share your new skills with your employer or on LinkedIn.
Maven Guarantee
This course is backed by the Maven Guarantee. Students are eligible for a full refund through the second week of the course.
17 live sessions • 32 lessons • 5 projects
Feb
16
Feb
20
Feb
20
Feb
23
Feb
27
Feb
27

Learn about direct and indirect techniques for manipulating AI chatbots into performing unintended actions.
Explore strategies for preventing harmful attacks from malicious actors.
Apply prompt injection techniques from this lesson to deceive chatbots using HackAPrompt.com
Live sessions
1 hr / week
Mon, Feb 16
6:00 PM—7:00 PM (UTC)
Fri, Feb 20
6:00 PM—7:00 PM (UTC)
Fri, Feb 20
7:00 PM—8:00 PM (UTC)
Projects
1 hr / week
Async content
1 hr / week

Andy Purdy

Logan Kilpatrick

Alex Blanton
$1,495
USD