Improve reliability of your AI applications

Hosted by Shreya Rajpal


What you'll learn

Why validation of AI applications is needed

LLM applications require fundamentally new abstractions that systematically validate their performance

Survey of AI reliability techniques

Explore common techniques for improving the reliability of AI applications and learn how to evaluate which technique to adopt

How to build robust AI validators

Hands-on demo of building an AI validator for a specific use case

Why this topic matters

Most LLM applications today showcase impressive capabilities, but their impact is limited by a lack of reliability. This lesson focuses on the problem of making AI applications reliable: it surveys existing techniques for AI reliability and then walks through the use case of building an AI validator.
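As a preview of the validator idea the hands-on demo builds on, here is a minimal sketch in plain Python. It is not the Guardrails AI API; the names `ValidationResult` and `validate_json_output` are illustrative, and the example assumes a structured-output use case where the LLM is asked to return a JSON object.

```python
# Illustrative validator pattern (plain Python, not the Guardrails AI API):
# check an LLM response against a machine-checkable rule and either pass it
# through or reject it with a reason the caller can act on (retry, fix, block).

import json
from dataclasses import dataclass
from typing import Optional


@dataclass
class ValidationResult:
    valid: bool
    value: Optional[dict] = None   # parsed output when validation passes
    error: Optional[str] = None    # reason for rejection when it fails


def validate_json_output(llm_output: str, required_keys: set[str]) -> ValidationResult:
    """Validate that an LLM response is a JSON object containing the required keys."""
    # LLMs often wrap JSON in markdown fences; strip them before parsing.
    text = llm_output.strip()
    if text.startswith("```"):
        text = text.strip("`").removeprefix("json").strip()

    try:
        parsed = json.loads(text)
    except json.JSONDecodeError as exc:
        return ValidationResult(valid=False, error=f"not valid JSON: {exc}")

    if not isinstance(parsed, dict):
        return ValidationResult(valid=False, error="expected a JSON object")

    missing = required_keys - parsed.keys()
    if missing:
        return ValidationResult(valid=False, error=f"missing keys: {sorted(missing)}")

    return ValidationResult(valid=True, value=parsed)


if __name__ == "__main__":
    raw = '```json\n{"sentiment": "positive", "confidence": 0.92}\n```'
    result = validate_json_output(raw, required_keys={"sentiment", "confidence"})
    print(result)  # ValidationResult(valid=True, value={...}, error=None)
```

The point of the pattern is that the check is deterministic and machine-verifiable, so a failure can trigger a retry, an automatic fix, or a rejection instead of letting unreliable output flow downstream.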

You'll learn from

Shreya Rajpal

CEO and Cofounder, Guardrails AI

Shreya Rajpal is the CEO of Guardrails AI, an open-source platform for improving the safety, reliability, and robustness of large language models in real-world applications. Her expertise spans a decade in machine learning and AI. Most recently, she was the founding engineer at Predibase, where she led the ML infrastructure team. In earlier roles, she was part of the cross-functional ML team within Apple's Special Projects Group and developed computer vision models for autonomous driving perception systems at Drive.ai.

Previously at

Apple
University of Illinois
IIT Delhi

Watch the recording for free
