How to Secure Your AI System

Hosted by Sander Schulhoff

Tue, Jun 24, 2025

8:00 PM UTC (45 minutes)

Virtual (Zoom)

Free to join

Invite your network

Go deeper with a course

Save 20% til Sunday

AI Red Teaming and AI Security Masterclass
Sander Schulhoff
View syllabus

What you'll learn

Prompt Injection Techniques

Learn about direct and indirect techniques for manipulating AI chatbots into performing unintended actions.

Defenses for AI Systems

Explore strategies for preventing harmful attacks from malicious actors.

Hands-on Demos

Apply prompt injection techniques from this lesson to deceive chatbots using HackAPrompt.com

Why this topic matters

As artificial intelligence becomes increasingly embedded in daily and professional life, it is important to understand the limitations of such technology. In this lesson, you'll learn about how AI models can be manipulated through carefully crafted prompts and explore defense techniques for securing AI systems.

You'll learn from

Sander Schulhoff

Co-founder, Learn Prompting & HackAPrompt

Sander Schulhoff created the first Prompt Engineering guide on the internet, two months before ChatGPT was released, which has taught 3 million people how to prompt ChatGPT. He also partnered with OpenAI to run the first AI Red Teaming competition, HackAPrompt, which was 2x larger than the White House's subsequent AI Red Teaming competition. Today, HackAPrompt partners with the Frontier AI labs to produce research that makes their models more secure.


Sander's background is in Natural Language Processing and deep reinforcement learning. He recently led the team behind The Prompt Report, the most comprehensive study of prompt engineering ever done. This 76-page survey, co-authored with OpenAI, Microsoft, Google, Princeton, Stanford, and other leading institutions, analyzed 1,500+ academic papers and covered 200+ prompting techniques.

Learn directly from Sander Schulhoff

By continuing, you agree to Maven's Terms and Privacy Policy.

© 2025 Maven Learning, Inc.