AI Systems Under Pressure: Red-Team Before You Ship

Hosted by Sander Schulhoff

Fri, Jan 23, 2026

5:00 PM UTC (1 hour)

Virtual (Zoom)

Free to join

133 students


Go deeper with a course

Featured in Lenny’s List
Building Agentic AI Applications with a Problem-First Approach
Aishwarya Naresh Reganti and Kiriti Badam

What you'll learn

Define the Real AI Problem

Learn how unclear problem definitions introduce risk in AI systems before prompts or models are involved.

See How AI Systems Break Under Pressure

Examine a real AI workflow to understand how mismatches between system behavior and the problem definition create exploitable failures.

Test Assumptions Before You Ship

Learn how a red-teaming mindset helps validate problem framing and surface failures earlier.

Why this topic matters

When AI systems fail, the cause often traces back to early product decisions and untested assumptions about behavior. If key questions go unanswered, systems behave in unexpected ways as prompts, context, and workflows interact. Red-teaming surfaces those failures early by testing system behavior under real-world conditions before you ship.

You'll learn from

Sander Schulhoff

AI researcher and Founder of Learn Prompting

Sander specializes in AI security, red-teaming, and system reliability. He ran HackAPrompt, the world’s first AI red-teaming competition, backed by OpenAI, and has trained teams at Microsoft, OpenAI, Stanford, and Google.

Sander is known for translating complex AI failure modes into clear, practical insights for builders, PMs, and security teams.

Sign up to join this lesson
