AI Researcher & Learn Prompting Founder
.png&w=768&q=75)
The #1 AI Security Course. Created by Learn Prompting, the team behind the first Prompt Engineering Guide.
Prompt injection attacks are the #1 security vulnerability in enterprise AI systems. Attackers can manipulate chatbots to steal sensitive data, access internal documents, and bypass safety guardrails. Many organizations do not realize how exposed their AI systems are or how easily these attacks can succeed.
Teams building AI products often lack a reliable way to test how their systems behave under real pressure. Traditional security reviews do not reveal AI-specific weaknesses, and features are frequently shipped without understanding how attackers can influence model behavior. This course gives you a practical way to evaluate these risks by working directly with live systems. You will uncover real vulnerabilities, test real attacks, and learn how to apply protections that hold up in production.
The training includes hands-on projects that prepare you for the AIRTP+ certification exam. Thousands of professionals from Microsoft, Google, Capital One, IBM, ServiceNow, and Walmart have trained with this approach and now use these methods to secure AI systems worldwide.
Learn how to uncover AI vulnerabilities, run real attacks, and apply defenses that secure systems in production.
Learn how prompt injections, jailbreaks, and adversarial inputs actually succeed
Study real model behavior to identify weak points in prompts, context, and integrations
Recognize where traditional security assumptions break down in AI applications
Run hands-on attacks in a controlled environment to expose real vulnerabilities
Trace how manipulations happen by analyzing system outputs and behavior
Use red teaming workflows to validate risks and stress-test AI features
Apply validation, filtering, and safety controls that strengthen system behavior
Test defenses under realistic conditions to confirm they prevent exploitation
Build repeatable evaluation routines that surface failures early
Investigate live systems for vulnerabilities and test exploitation paths
Repair insecure flows and measure the impact of your fixes
Complete projects that prepare you for the AIRTP+ exam through practical, exam-aligned work
.png&w=384&q=75)
AI researcher. Founder of Learn Prompting. AI security expert.
Security, trust, and safety professionals who need practical ways to evaluate AI risks, run red team tests, and improve system reliability.
PMs and product leads who want to understand AI vulnerabilities and make informed decisions about safety, risk, and system behavior.
Engineers and technical ICs who want practical skills to test AI behavior, uncover weaknesses, and build more secure AI features.
.png&w=1536&q=75)
Live sessions
Learn directly from Sander Schulhoff in a real-time, interactive format.
Learn Prompting+ Access
Complimentary access to over 15 comprehensive courses on Prompt Engineering, Prompt Hacking, and other related topics.
AIRT+ Certification
Receive a voucher to take the AI Red Teaming Professional Certificate Exam for free!
Lifetime access
Go back to course content and recordings whenever you need to.
Community of peers
Stay accountable and share insights with like-minded professionals.
Certificate of completion
Share your new skills with your employer or on LinkedIn.
Maven Guarantee
This course is backed by the Maven Guarantee. Students are eligible for a full refund up until the halfway point of the course.
17 live sessions • 31 lessons • 5 projects
Jan
29
Jan
26
Jan
27
Feb
1
Jan
31
Feb
7
Feb
8
Feb
7
Feb
2

Learn about direct and indirect techniques for manipulating AI chatbots into performing unintended actions.
Explore strategies for preventing harmful attacks from malicious actors.
Apply prompt injection techniques from this lesson to deceive chatbots using HackAPrompt.com
Live sessions
1 hr / week
Thu, Jan 29
6:00 PM—7:00 PM (UTC)
Mon, Jan 26
5:00 PM—6:00 PM (UTC)
Tue, Jan 27
6:00 PM—7:00 PM (UTC)
Projects
1 hr / week
Async content
1 hr / week

Andy Purdy

Logan Kilpatrick

Alex Blanton
$1,495
USD