AI Security in Action: Hands-on cohort on Attacking & Defending AI apps & Agents

4 Days

·

Cohort-based Course

Master AI & LLM security through CTF-style cohort, tackling real-world attacks, defenses, adversarial threats, and Responsible AI principles

This course is popular

5 people enrolled last week.

Featured at intl. Cybersecurity events:

Black Hat
Open Web Application Security Project
RSA
BSides
DEF CON Groups VR

Course overview

Live, hands-on, CTF-style

Can prompt injections lead to complete infrastructure takeovers? Could AI applications be exploited to compromise backend services? Can data poisoning in AI copilots impact a company's stock? Can jailbreaks create false crisis alerts in security systems? This immersive, CTF-styled cohort in GenAI and LLM security dives into these pressing questions.

Engage in realistic attack and defense scenarios focused on real-world threats, from prompt injection and remote code execution to backend compromise. Tackle hands-on challenges with actual AI applications to understand vulnerabilities and develop robust defenses. You’ll learn how to create a comprehensive security pipeline, mastering AI red and blue team strategies, building resilient defenses for LLMs, and handling incident response for AI-based threats. Additionally, implement a Responsible AI (RAI) program to enforce ethical AI standards across enterprise services, fortifying your organization’s AI security foundation.


 

By the end of this cohort, you will be able to:


- Exploit vulnerabilities in live AI applications to achieve code and command execution, uncovering scenarios such as prompt injections, exploiting agent designs, and remote code execution for infrastructure takeover.


- Conduct AI red-teaming using adversary simulation, OWASP LLM Top 10, and MITRE ATLAS frameworks.


- Perform advanced AI red and blue teaming through multi-agent auto-prompting attacks, implementing a 3-way autonomous system consisting of attack, defend and judge models.


- Build and deploy enterprise-grade LLM defenses, including custom guardrails for input/output protection, security bench-marking, and penetration testing of LLM agents.


- Perform threat modeling on Agentic applications and services to discover security & design weakness.


- Establish a comprehensive AI SecOps process to secure the supply chain from adversarial attacks and create a robust threat model for enterprise applications.

Who is this course for

01

CISOs & Security practitioners looking to implement AI security best practices.

02

Security professionals seeking to update their skills for the AI era.

03

Red & Blue team members.

04

AI Developers & Engineers interested in the security aspects of AI and LLM models.

05

AI Safety professionals and analysts working on regulations, controls and policies related to AI.


06

Product Managers & Founders looking to strengthen their PoVs and models with security best practices.


What you’ll get out of this course

Access to Practical, hands-on labs, simulating real attacks on AI Applications & Agentic systems giving you a real-world experience.

You will have one year access to a live playground consisting of real-world AI Applications to sharpen your security skills.

Gain deep understanding of attacks on AI stack including prompt injections, jailbreaks, code executions, infrastructure takeovers.

The live playground will consist of vulnerable AI applications and services. You will be navigating through the applications to find vulnerabilities and security issues, capturing the hidden flags to solve the exercises.

Learn to build AI application defenses with security scanners, guardrails, and pipelines to protect enterprise AI systems.

Understand how to build enterprise grade defense for AI services, using open source tools.

Master GenAI red & blue teaming via OWASP LLM Top 10 and MITRE ATLAS, tackling advanced adversary simulations and CTF challenges.

Learn to implement industry standards and frameworks to define a security strategy and fortifying your defenses.

Implement Responsible AI programs focusing on ethics, bias detection, and risk management for secure GenAI applications.

Build a deeper understanding of Responsible AI use-cases, testing, bench-marking and more.

What’s included

Abhinav Singh

Live sessions

Learn directly from Abhinav Singh in a real-time, interactive format.

Lifetime access

Go back to course content and recordings whenever you need to.

Lifetime access to Labs & CTF Platform

Enrolled students get lifetime access to an online lab platform(CTF) with many additional labs and content to master their skills.

Community of peers

Stay accountable and share insights with like-minded professionals through a dedicated Discord community for AI Security.

Certificate of completion

Share your new skills with your employer or on LinkedIn.

Maven Guarantee

This course is backed by the Maven Guarantee. Students are eligible for a full refund up until the halfway point of the course.

Course syllabus

4 live sessions • 15 lessons • 5 projects

Week 1

Sep 16—Sep 19

    Prompt Injections & threat Modeling of AI Applications

    • Sep

      16

      Session 1

      Tue 9/165:00 PM—8:00 PM (UTC)
    4 more items

    Jailbreaks & Elements of Responsible AI(RAI)

    • Sep

      17

      Session 2

      Wed 9/175:00 PM—8:00 PM (UTC)
    4 more items

    Scalable AI Red Teaming

    • Sep

      18

      Session 3

      Thu 9/185:00 PM—8:00 PM (UTC)
    4 more items

    Agentic Security

    • Sep

      19

      Session 4

      Fri 9/195:00 PM—8:00 PM (UTC)
    5 more items

Post-course

    Conclusion

    0 items

    Advance Jailbreaks

    0 items

    Attacking AI Agents

    0 items

    Lateral Movement and Privilege escalantion by abusing AI Agents & applications

    0 items

    Benchmarking for Safety & Security Of AI applications and services

    0 items

Bonus

    Introduction to AI and Security use-cases

    2 items

    Attacking AI Applications & Services

    1 item

    Jailbreaks & Responsible AI

    0 items

What people are saying

        very good structured! relatively easy to follow, even if you have no background in AI, good comprehensive overview gained after this training!
Yogeshwar Agnihotri

Yogeshwar Agnihotri

Security Consultant
        I liked it. State of the Art information and very relevant. Also it was the perfect mixture out of organizational and technical topics.
Siegfried Hollerer

Siegfried Hollerer

Author title
        Loved the material/content and CTF/Labs! the person that was presenting was cool too lol thanks again you were awesome! really enjoyed the class!
Sean M.

Sean M.

Meet your instructor

Abhinav Singh

Abhinav Singh

International Trainer & Speaker, Security Researcher & Consultant.

Abhinav Singh is an esteemed cybersecurity leader & researcher with over 15 years of experience across technology leaders, financial institutions, and as an independent trainer and consultant. Author of "Metasploit Penetration Testing Cookbook" and "Instant Wireshark Starter," his contributions span patents, open-source tools, and numerous publications. Recognized in security portals and digital platforms, Abhinav is a sought-after speaker & trainer at international conferences like Black Hat, RSA, DEF CON, BruCon and many more, where he shares his deep industry insights and innovative approaches in cybersecurity. He also leads multiple AI security groups at CSA, responsible for coming up with cutting-edge whitepapers and industry reports around safety and security of GenAI.

Prior to running the cohort on Maven, this AI Security course has been delivered at International Cybersecurity conferences and multi-national organizations and has received positive feedback all around.

A pattern of wavy dots

Join an upcoming cohort

AI Security in Action: Hands-on cohort on Attacking & Defending AI apps & Agents

Cohort 3

$595

Dates

Sep 16—19, 2025

Payment Deadline

Sep 17, 2025

Don't miss out! Enrollment closes in 8 days

Get reimbursed

Course schedule

4-6 hours per week

  • Tuesdays & Thursdays

    1:00pm - 2:00pm EST

    If your events are recurring and at the same time, it might be easiest to use a single line item to communicate your course schedule to students

  • May 7, 2022

    Feel free to type out dates as your title as a way to communicate information about specific live sessions or other events.

  • Weekly projects

    2 hours per week

    Schedule items can also be used to convey commitments outside of specific time slots (like weekly projects or daily office hours).

Learning is better with cohorts

Learning is better with cohorts

Active hands-on learning

This course builds on live workshops and hands-on projects

Interactive and project-based

You’ll be interacting with other learners through breakout rooms and project teams

Learn with a cohort of peers

Join a community of like-minded people who want to learn and grow alongside you

Frequently Asked Questions

A pattern of wavy dots

Join an upcoming cohort

AI Security in Action: Hands-on cohort on Attacking & Defending AI apps & Agents

Cohort 3

$595

Dates

Sep 16—19, 2025

Payment Deadline

Sep 17, 2025

Don't miss out! Enrollment closes in 8 days

Get reimbursed

$595

USD

8 days left to enroll