Featured in

Lenny’s List

AI Red Teaming and AI Security Masterclass

Sander Schulhoff

AI Researcher & Learn Prompting Founder

Test, break, and secure AI systems using proven red teaming methods.

The #1 AI Security Course. Created by Learn Prompting, the team behind the first Prompt Engineering Guide.

Prompt injection attacks are the #1 security vulnerability in enterprise AI systems. Attackers can manipulate chatbots to steal sensitive data, access internal documents, and bypass safety guardrails. Many organizations do not realize how exposed their AI systems are or how easily these attacks can succeed.

Teams building AI products often lack a reliable way to test how their systems behave under real pressure. Traditional security reviews do not reveal AI-specific weaknesses, and features are frequently shipped without understanding how attackers can influence model behavior. This course gives you a practical way to evaluate these risks by working directly with live systems. You will uncover real vulnerabilities, test real attacks, and learn how to apply protections that hold up in production.

The training includes hands-on projects that prepare you for the AIRTP+ certification exam. Thousands of professionals from Microsoft, Google, Capital One, IBM, ServiceNow, and Walmart have trained with this approach and now use these methods to secure AI systems worldwide.

What you’ll learn

Learn how to uncover AI vulnerabilities, run real attacks, and apply defenses that secure systems in production.

  • Learn how prompt injections, jailbreaks, and adversarial inputs actually succeed

  • Study real model behavior to identify weak points in prompts, context, and integrations

  • Recognize where traditional security assumptions break down in AI applications

  • Run hands-on attacks in a controlled environment to expose real vulnerabilities

  • Trace how manipulations happen by analyzing system outputs and behavior

  • Use red teaming workflows to validate risks and stress-test AI features

  • Apply validation, filtering, and safety controls that strengthen system behavior

  • Test defenses under realistic conditions to confirm they prevent exploitation

  • Build repeatable evaluation routines that surface failures early

  • Investigate live systems for vulnerabilities and test exploitation paths

  • Repair insecure flows and measure the impact of your fixes

  • Complete projects that prepare you for the AIRTP+ exam through practical, exam-aligned work

Learn directly from Sander

Sander Schulhoff

Sander Schulhoff

AI researcher. Founder of Learn Prompting. AI security expert.

Who this course is for

  • Security, trust, and safety professionals who need practical ways to evaluate AI risks, run red team tests, and improve system reliability.

  • PMs and product leads who want to understand AI vulnerabilities and make informed decisions about safety, risk, and system behavior.

  • Engineers and technical ICs who want practical skills to test AI behavior, uncover weaknesses, and build more secure AI features.

What's included

Sander Schulhoff

Live sessions

Learn directly from Sander Schulhoff in a real-time, interactive format.

Learn Prompting+ Access

Complimentary access to over 15 comprehensive courses on Prompt Engineering, Prompt Hacking, and other related topics.

AIRT+ Certification

Receive a voucher to take the AI Red Teaming Professional Certificate Exam for free!

Lifetime access

Go back to course content and recordings whenever you need to.

Community of peers

Stay accountable and share insights with like-minded professionals.

Certificate of completion

Share your new skills with your employer or on LinkedIn.

Maven Guarantee

This course is backed by the Maven Guarantee. Students are eligible for a full refund up until the halfway point of the course.

Course syllabus

17 live sessions • 31 lessons • 5 projects

Week 1

Jan 26—Feb 1

    Jan

    29

    Live Session 1: Sander Schulhoff - Introduction to AI Red Teaming and Classical Security

    Thu 1/296:00 PM—7:00 PM (UTC)

    Guest Speaker: Pliny the Prompter - Jailbreaking Every AI Model

    1 item

    Introduction to Prompt Hacking

    2 items

    Introduction to GenAI Security and Harms

    2 items

    Jan

    26

    Guest Speaker: Joseph Thacker - AI App Attacks

    Mon 1/265:00 PM—6:00 PM (UTC)

    Jan

    27

    Guest Speaker: Valen Tagliabue - Nudge, Trick, Break: Hacking AI by Thinking Like It

    Tue 1/276:00 PM—7:00 PM (UTC)

    👾 Project Kickoff: Hack HackAPrompt (Intro Track)

    1 item

    Feb

    1

    Live Prompt Hacking & Project Review

    Sun 2/16:00 PM—7:00 PM (UTC)

    Jan

    31

    Office Hours

    Sat 1/317:00 PM—8:00 PM (UTC)

    📚 Resources and Recommended Reading

    1 item

    ⭐ Learning Outcomes

    1 item

Week 2

Feb 2—Feb 8

    Guest Speaker: Donato Capitella - Hacking LLM Applications: Tales and Techniques

    1 item

    Comprehensive Guide to Prompt Hacking Techniques

    2 items

    Harms in AI-Red Teaming

    4 items

    👾 Project: Hack HackAPrompt 1.0

    1 item

    Feb

    7

    Live Session 2: Sander Schulhoff - Ignore Your Instructions and HackAPrompt

    Sat 2/76:00 PM—7:00 PM (UTC)

    Feb

    8

    Live Prompt Hacking and Office Hours

    Sun 2/87:00 PM—8:00 PM (UTC)

    Feb

    7

    Office Hours

    Sat 2/77:00 PM—8:00 PM (UTC)

    Feb

    2

    Guest Speaker: Leonard Tang, CEO of Haize Labs - Frontiers of Red-Teaming

    Mon 2/26:00 PM—7:00 PM (UTC)

    📚 Resources and Recommended Reading

    1 item

    ⭐ Learning Outcomes

    1 item

Free resource

How to Secure Your AI System cover image

How to Secure Your AI System

Prompt Hacking Techniques

Learn about direct and indirect techniques for manipulating AI chatbots into performing unintended actions.

Defenses for AI Systems

Explore strategies for preventing harmful attacks from malicious actors.

Hands-on Demos

Apply prompt injection techniques from this lesson to deceive chatbots using HackAPrompt.com

Schedule

Live sessions

1 hr / week

    • Thu, Jan 29

      6:00 PM—7:00 PM (UTC)

    • Mon, Jan 26

      5:00 PM—6:00 PM (UTC)

    • Tue, Jan 27

      6:00 PM—7:00 PM (UTC)

Projects

1 hr / week

Async content

1 hr / week

Testimonials

  • Hands-on teaching and learning. Good intros and opportunity to work through assignments.
    Testimonial author image

    Andy Purdy

    CISO of Huawei
  • The folks at https://learnprompting.org do a great job!
    Testimonial author image

    Logan Kilpatrick

    Head of Developer Relations at OpenAI
  • Thank you for today’s workshop! We had 1,696 attendees— This is a very high number for our internal community, second only to our keynote at last December’s big conference
    Testimonial author image

    Alex Blanton

    AI/ML Community Lead (Office of CTO) at Microsoft

Frequently asked questions

$1,495

USD

·
Jan 26Feb 21
Enroll