Class is in session

AI Security in Action: Attacking & Defending AI Application & Services

6 Weeks

·

Cohort-based Course

Master AI & LLM security through CTF-style cohort, tackling real-world attacks, defenses, adversarial threats, and Responsible AI principles

Previously at

JPMorgan Chase & Co.
Amazon Web Services
Netskope
Broadcom Inc.

Course overview

Live, hands-on, CTF-style

Can prompt injections lead to complete infrastructure takeovers? Could AI applications be exploited to compromise backend services? Can data poisoning in AI copilots impact a company's stock? Can jailbreaks create false crisis alerts in security systems? This immersive, CTF-styled cohort in GenAI and LLM security dives into these pressing questions.

Engage in realistic attack and defense scenarios focused on real-world threats, from prompt injection and remote code execution to backend compromise. Tackle hands-on challenges with actual AI applications to understand vulnerabilities and develop robust defenses. You’ll learn how to create a comprehensive security pipeline, mastering AI red and blue team strategies, building resilient defenses for LLMs, and handling incident response for AI-based threats. Additionally, implement a Responsible AI (RAI) program to enforce ethical AI standards across enterprise services, fortifying your organization’s AI security foundation.


 

By the end of this training, you will be able to:


- Exploit vulnerabilities in live AI applications to achieve code and command execution, uncovering scenarios such as cross-site scripting, SQL injection, insecure agent designs, and remote code execution for infrastructure takeover.


- Conduct GenAI red-teaming using adversary simulation, OWASP LLM Top 10, and MITRE ATLAS frameworks, while applying AI security and ethical principles in real-world scenarios.


- Execute and defend against adversarial attacks, including prompt injection, data poisoning, model inversion, and agentic attacks.


- Perform advanced AI red and blue teaming through multi-agent auto-prompting attacks, implementing a 3-way autonomous system consisting of attack, defend and judge models.


- Develop LLM security scanners to detect and protect against injections, jailbreaks, manipulations, and risky behaviors, as well as defending LLMs with LLMs.


- Build and deploy enterprise-grade LLM defenses, including custom guardrails for input/output protection, security bench-marking, and penetration testing of LLM agents.


- Utilize open-source tools like HuggingFace, OpenAI, NeMo, Streamlit, and Garak to build custom GenAI tools and enhance your GenAI development skills.


- Establish a comprehensive AI SecOps process to secure the supply chain from adversarial attacks and create a robust threat model for enterprise applications.


- Implement an incident response and risk management plan for enterprises developing or using GenAI services.

Who is this course for

01

Security professionals seeking to update their skills for the AI era.

02

Red & Blue team members.

03

AI Developers & Engineers interested in the security aspects of AI and LLM models.

04

AI Safety professionals and analysts working on regulations, controls and policies related to AI.


05

Product Managers & Founders looking to strengthen their PoVs and models with security best practices.


What you’ll get out of this course

Access to Practical, hands-on labs, simulating real attacks on AI Applications & Agentic systems giving you a real-world experience.

You will have one year access to a live playground consisting of real-world AI Applications to sharpen your security skills.

Gain deep understanding of attacks on AI stack including prompt injections, jailbreaks, code executions, infrastructure takeovers.

The live playground will consist of vulnerable AI applications and services. You will be navigating through the applications to find vulnerabilities and security issues, capturing the hidden flags to solve the exercises.

Learn to build AI application defenses with security scanners, guardrails, and pipelines to protect enterprise AI systems.

Understand how to build enterprise grade defense for AI services, using open source tools.

Master GenAI red & blue teaming via OWASP LLM Top 10 and MITRE ATLAS, tackling advanced adversary simulations and CTF challenges.

Learn to implement industry standards and frameworks to define a security strategy and fortifying your defenses.

Implement Responsible AI programs focusing on ethics, bias detection, and risk management for secure GenAI applications.

Build a deeper understanding of Responsible AI use-cases, testing, bench-marking and more.

Design and deploy SecOps processes to secure GenAI supply chains and perform comprehensive threat modeling for enterprise AI systems.

Combine all the learning to build a SecOps process that is scalable and meets the demands of the industry.

Develop custom AI tools using open-source platforms like HuggingFace, OpenAI, NeMo, and Streamlit to enhance GenAI capabilities.

This course includes

Interactive live sessions

Lifetime access to course materials

7 in-depth lessons

Direct access to instructor

Projects to apply learnings

Guided feedback & reflection

Private community of peers

Course certificate upon completion

Maven Satisfaction Guarantee

This course is backed by Maven’s guarantee. You can receive a full refund within 14 days after the course ends, provided you meet the completion criteria in our refund policy.

Course syllabus

Week 1

Feb 10—Feb 16

    Introduction to AI Systems, Architecture and Application Designs

    1 item

    Elements of AI Security

    0 items

Week 2

Feb 17—Feb 23

    Adversarial LLM Attacks(Red Teaming)

    1 item

    Advance Red Teaming at scale

    0 items

Week 3

Feb 24—Mar 2

Week 4

Mar 3—Mar 9

    Attacking AI Agents

    0 items

    Lateral Movement and Privilege escalantion by abusing AI Agents & applications

    0 items

Week 5

Mar 10—Mar 16

    Building Enterprise grade LLM defenses

    0 items

    Benchmarking for Safety & Security Of AI applications and services

    0 items

Week 6

Mar 17

Bonus

    Introduction to AI and Security use-cases

    2 items

    Attacking AI Applications & Services

    1 item

    Advance Attacks on Agentic Systems and Applications

    2 items

    Jailbreaks & Responsible AI

    0 items

    Building Defense Guardrails

    0 items

What people are saying

        very good structured! relatively easy to follow, even if you have no background in AI, good comprehensive overview gained after this training!
Yogeshwar Agnihotri

Yogeshwar Agnihotri

Security Consultant
        I liked it. State of the Art information and very relevant. Also it was the perfect mixture out of organizational and technical topics.
Siegfried Hollerer

Siegfried Hollerer

Author title
        Loved the material/content and CTF/Labs! the person that was presenting was cool too lol thanks again you were awesome! really enjoyed the class!
Sean M.

Sean M.

Meet your instructor

Abhinav Singh

Abhinav Singh

International Trainer & Speaker, Security Researcher & Consultant.

Abhinav Singh is an esteemed cybersecurity leader & researcher with over 15 years of experience across technology leaders, financial institutions, and as an independent trainer and consultant. Author of "Metasploit Penetration Testing Cookbook" and "Instant Wireshark Starter," his contributions span patents, open-source tools, and numerous publications. Recognized in security portals and digital platforms, Abhinav is a sought-after speaker & trainer at international conferences like Black Hat, RSA, DEF CON, BruCon and many more, where he shares his deep industry insights and innovative approaches in cybersecurity. He also leads multiple AI security groups at CSA, responsible for coming up with cutting-edge whitepapers and industry reports around safety and security of GenAI.

Prior to running the cohort on Maven, this AI Security course has been delivered at International Cybersecurity conferences and multi-national organizations and has received positive feedback all around.

A pattern of wavy dots

Be the first to know about upcoming cohorts

AI Security in Action: Attacking & Defending AI Application & Services

Course schedule

4-6 hours per week

  • Tuesdays & Thursdays

    1:00pm - 2:00pm EST

    If your events are recurring and at the same time, it might be easiest to use a single line item to communicate your course schedule to students

  • May 7, 2022

    Feel free to type out dates as your title as a way to communicate information about specific live sessions or other events.

  • Weekly projects

    2 hours per week

    Schedule items can also be used to convey commitments outside of specific time slots (like weekly projects or daily office hours).

Learning is better with cohorts

Learning is better with cohorts

Active hands-on learning

This course builds on live workshops and hands-on projects

Interactive and project-based

You’ll be interacting with other learners through breakout rooms and project teams

Learn with a cohort of peers

Join a community of like-minded people who want to learn and grow alongside you

Frequently Asked Questions

A pattern of wavy dots

Be the first to know about upcoming cohorts

AI Security in Action: Attacking & Defending AI Application & Services