Class is in session
6 Weeks
·Cohort-based Course
Master AI & LLM security through CTF-style cohort, tackling real-world attacks, defenses, adversarial threats, and Responsible AI principles
Class is in session
6 Weeks
·Cohort-based Course
Master AI & LLM security through CTF-style cohort, tackling real-world attacks, defenses, adversarial threats, and Responsible AI principles
Previously at
Course overview
Can prompt injections lead to complete infrastructure takeovers? Could AI applications be exploited to compromise backend services? Can data poisoning in AI copilots impact a company's stock? Can jailbreaks create false crisis alerts in security systems? This immersive, CTF-styled cohort in GenAI and LLM security dives into these pressing questions.
Engage in realistic attack and defense scenarios focused on real-world threats, from prompt injection and remote code execution to backend compromise. Tackle hands-on challenges with actual AI applications to understand vulnerabilities and develop robust defenses. You’ll learn how to create a comprehensive security pipeline, mastering AI red and blue team strategies, building resilient defenses for LLMs, and handling incident response for AI-based threats. Additionally, implement a Responsible AI (RAI) program to enforce ethical AI standards across enterprise services, fortifying your organization’s AI security foundation.
By the end of this training, you will be able to:
- Exploit vulnerabilities in live AI applications to achieve code and command execution, uncovering scenarios such as cross-site scripting, SQL injection, insecure agent designs, and remote code execution for infrastructure takeover.
- Conduct GenAI red-teaming using adversary simulation, OWASP LLM Top 10, and MITRE ATLAS frameworks, while applying AI security and ethical principles in real-world scenarios.
- Execute and defend against adversarial attacks, including prompt injection, data poisoning, model inversion, and agentic attacks.
- Perform advanced AI red and blue teaming through multi-agent auto-prompting attacks, implementing a 3-way autonomous system consisting of attack, defend and judge models.
- Develop LLM security scanners to detect and protect against injections, jailbreaks, manipulations, and risky behaviors, as well as defending LLMs with LLMs.
- Build and deploy enterprise-grade LLM defenses, including custom guardrails for input/output protection, security bench-marking, and penetration testing of LLM agents.
- Utilize open-source tools like HuggingFace, OpenAI, NeMo, Streamlit, and Garak to build custom GenAI tools and enhance your GenAI development skills.
- Establish a comprehensive AI SecOps process to secure the supply chain from adversarial attacks and create a robust threat model for enterprise applications.
- Implement an incident response and risk management plan for enterprises developing or using GenAI services.
01
Security professionals seeking to update their skills for the AI era.
02
Red & Blue team members.
03
AI Developers & Engineers interested in the security aspects of AI and LLM models.
04
AI Safety professionals and analysts working on regulations, controls and policies related to AI.
05
Product Managers & Founders looking to strengthen their PoVs and models with security best practices.
Access to Practical, hands-on labs, simulating real attacks on AI Applications & Agentic systems giving you a real-world experience.
You will have one year access to a live playground consisting of real-world AI Applications to sharpen your security skills.
Gain deep understanding of attacks on AI stack including prompt injections, jailbreaks, code executions, infrastructure takeovers.
The live playground will consist of vulnerable AI applications and services. You will be navigating through the applications to find vulnerabilities and security issues, capturing the hidden flags to solve the exercises.
Learn to build AI application defenses with security scanners, guardrails, and pipelines to protect enterprise AI systems.
Understand how to build enterprise grade defense for AI services, using open source tools.
Master GenAI red & blue teaming via OWASP LLM Top 10 and MITRE ATLAS, tackling advanced adversary simulations and CTF challenges.
Learn to implement industry standards and frameworks to define a security strategy and fortifying your defenses.
Implement Responsible AI programs focusing on ethics, bias detection, and risk management for secure GenAI applications.
Build a deeper understanding of Responsible AI use-cases, testing, bench-marking and more.
Design and deploy SecOps processes to secure GenAI supply chains and perform comprehensive threat modeling for enterprise AI systems.
Combine all the learning to build a SecOps process that is scalable and meets the demands of the industry.
Develop custom AI tools using open-source platforms like HuggingFace, OpenAI, NeMo, and Streamlit to enhance GenAI capabilities.
Interactive live sessions
Lifetime access to course materials
7 in-depth lessons
Direct access to instructor
Projects to apply learnings
Guided feedback & reflection
Private community of peers
Course certificate upon completion
Maven Satisfaction Guarantee
This course is backed by Maven’s guarantee. You can receive a full refund within 14 days after the course ends, provided you meet the completion criteria in our refund policy.
AI Security in Action: Attacking & Defending AI Application & Services
Yogeshwar Agnihotri
Siegfried Hollerer
Sean M.
International Trainer & Speaker, Security Researcher & Consultant.
Abhinav Singh is an esteemed cybersecurity leader & researcher with over 15 years of experience across technology leaders, financial institutions, and as an independent trainer and consultant. Author of "Metasploit Penetration Testing Cookbook" and "Instant Wireshark Starter," his contributions span patents, open-source tools, and numerous publications. Recognized in security portals and digital platforms, Abhinav is a sought-after speaker & trainer at international conferences like Black Hat, RSA, DEF CON, BruCon and many more, where he shares his deep industry insights and innovative approaches in cybersecurity. He also leads multiple AI security groups at CSA, responsible for coming up with cutting-edge whitepapers and industry reports around safety and security of GenAI.
Prior to running the cohort on Maven, this AI Security course has been delivered at International Cybersecurity conferences and multi-national organizations and has received positive feedback all around.
Be the first to know about upcoming cohorts
4-6 hours per week
Tuesdays & Thursdays
1:00pm - 2:00pm EST
If your events are recurring and at the same time, it might be easiest to use a single line item to communicate your course schedule to students
May 7, 2022
Feel free to type out dates as your title as a way to communicate information about specific live sessions or other events.
Weekly projects
2 hours per week
Schedule items can also be used to convey commitments outside of specific time slots (like weekly projects or daily office hours).
Active hands-on learning
This course builds on live workshops and hands-on projects
Interactive and project-based
You’ll be interacting with other learners through breakout rooms and project teams
Learn with a cohort of peers
Join a community of like-minded people who want to learn and grow alongside you
Be the first to know about upcoming cohorts