
Enterprise RAG and Multi-Agent Applications

4.9 (22 ratings) · 5 Weeks · Cohort-based Course

Build and Optimize Production-Grade RAG and LLM Applications: Master Advanced Techniques for Scalable, Secure, and Low-Latency AI Solutions

This course is popular

22 people enrolled last week.

Previously at: Google, Stanford University, UCLA, University of Minnesota

Course overview

Go Beyond Basic Frameworks: Build and Deploy Production-Grade AI Solutions

Welcome to the most technically rigorous and hands-on Large Language Model (LLM) application course available today.


This isn't just another AI course – it's your gateway to mastering the art and science of deploying production-grade LLM solutions that stand out in the real world.


As part of Maven's Top-Rated Content, this course is designed for those who have already mastered the basics of RAG, cosine similarity, vector databases, and LLMs. We'll take you to the next level, focusing on practical aspects of packaging and deploying these models in real-world production environments.


For cohort members joining this intensive learning experience, here's what you get:


- 6 weeks of in-depth content

- Weekly office hours for personalized guidance

- Real-world projects and challenging assignments

- Guest lectures by leading AI professionals

- Continued support post-graduation

- Lifetime access to course materials



What You'll Master: Course Highlights


Agents: Forget CrewAI or AutoGen; build your own agents from scratch. Learn what it takes to make an agent from the ground up, and contribute to the open-source community.
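To make "an agent from the ground up" concrete, here is a minimal sketch of the core loop: the model either calls a tool or answers, and tool results are fed back into the conversation. All names here are illustrative (a scripted stand-in plays the LLM), not the course's actual code.

```python
# Minimal agent loop: the model picks a tool, we run it, and feed the
# result back until it produces a final answer.

def fake_llm(history):
    # Stand-in for a real model call: answer once we have a tool result.
    if any(m.startswith("TOOL_RESULT") for m in history):
        return "ANSWER: 4"
    return "CALL: add 2 2"

TOOLS = {"add": lambda a, b: str(int(a) + int(b))}

def run_agent(task, llm, max_steps=5):
    history = [task]
    for _ in range(max_steps):
        reply = llm(history)
        if reply.startswith("ANSWER:"):
            return reply.removeprefix("ANSWER:").strip()
        # Parse a tool call of the form "CALL: <tool> <args...>"
        _, name, *args = reply.split()
        history.append("TOOL_RESULT " + TOOLS[name](*args))
    return None

print(run_agent("What is 2 + 2?", fake_llm))  # prints 4
```

Frameworks like CrewAI and AutoGen wrap exactly this loop in abstractions; building it yourself shows where the prompt, the tool schema, and the stopping condition actually live.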


Advanced RAG Solutions: Dive into enterprise-level RAG architectures and learn how to build and implement semantic caching from scratch using GCP and Redis.
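The core idea behind semantic caching can be sketched in a few lines: look up cached answers by embedding similarity rather than exact key match. This is a toy in-memory stand-in (a list instead of Redis, bag-of-words counts instead of real embeddings), under assumptions of my own, not the GCP/Redis build from the course.

```python
# Semantic cache sketch: a near-duplicate query should hit the cache
# and skip the expensive LLM call entirely.
import math
from collections import Counter

def embed(text):
    return Counter(text.lower().split())  # toy embedding

def cosine(a, b):
    dot = sum(a[k] * b[k] for k in a.keys() & b.keys())
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

class SemanticCache:
    def __init__(self, threshold=0.8):
        self.entries = []          # list of (embedding, answer)
        self.threshold = threshold

    def get(self, query):
        q = embed(query)
        best = max(self.entries, key=lambda e: cosine(q, e[0]), default=None)
        if best and cosine(q, best[0]) >= self.threshold:
            return best[1]         # cache hit: no LLM call needed
        return None

    def put(self, query, answer):
        self.entries.append((embed(query), answer))

cache = SemanticCache()
cache.put("what is the capital of france", "Paris")
print(cache.get("what is the capital of france?"))  # near-duplicate: hit
```

A production version swaps the list for a Redis vector index and the toy embedding for a real embedding model; the lookup logic stays the same.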


LLM Hosting and Deployment: Gain insights into best practices for hosting Large Language Models (LLMs) in diverse production settings, creating inference endpoints, and deploying LLMs on serverless platforms.


Continual Pre-Training and Fine-Tuning: Explore advanced techniques for continual pre-training, fine-tuning LLMs, and mitigating catastrophic forgetting. Learn how to build a data pipeline for pre-training, apply causal language modeling, and leverage scaling laws.
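One building block of a pre-training data pipeline is packing tokenized text into fixed-length blocks with next-token labels, which is what causal language modeling trains on. A minimal sketch, with integers standing in for real token IDs:

```python
# Causal-LM data pipeline sketch: concatenate tokenized documents,
# pack into fixed-length blocks, and shift labels by one position so
# the model predicts the next token at every step.

def pack_blocks(token_stream, block_size):
    blocks = []
    for i in range(0, len(token_stream) - block_size, block_size):
        chunk = token_stream[i : i + block_size + 1]  # one extra for the shift
        inputs, labels = chunk[:-1], chunk[1:]
        blocks.append((inputs, labels))
    return blocks

tokens = list(range(10))            # stand-in for a tokenized corpus
for inputs, labels in pack_blocks(tokens, block_size=4):
    print(inputs, labels)
```

Note that labels are just the inputs shifted by one; there is no separate annotation step, which is what makes pre-training data cheap to produce at scale.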


Model Merging and Mixture of Experts: Master techniques for merging multiple models to enhance their collective capabilities, including the Mixture of Experts (MoE) approach. Learn to use tools like mergekit for efficient model merging.
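At its simplest, weight-space merging is parameter interpolation, which tools like mergekit automate per-tensor with more sophisticated schemes (SLERP, TIES, and others). A sketch with plain dicts of floats standing in for state dicts:

```python
# Weight-space merging sketch: linearly interpolate two models'
# parameters. Requires matching architectures (same parameter names).

def merge(state_a, state_b, alpha=0.5):
    assert state_a.keys() == state_b.keys(), "architectures must match"
    return {k: (1 - alpha) * state_a[k] + alpha * state_b[k] for k in state_a}

model_a = {"layer1.weight": 1.0, "layer2.weight": -2.0}
model_b = {"layer1.weight": 3.0, "layer2.weight": 0.0}
print(merge(model_a, model_b, alpha=0.5))
```

`alpha` controls how much of each parent survives; a Mixture of Experts, by contrast, keeps the parent networks intact and routes tokens between them rather than averaging weights.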


Quantization Methods: Discover techniques to reduce model size while maintaining performance, crucial for deployment in resource-constrained environments.
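The essence of the technique fits in a few lines: map 32-bit floats to small integers via a scale factor, trading a little precision for a 4x smaller footprint. A sketch of symmetric int8 quantization in pure Python (real libraries do this per-tensor or per-channel on GPU):

```python
# Symmetric int8 quantization sketch: map floats to [-127, 127] with a
# per-tensor scale, then dequantize to see the rounding error.

def quantize(values):
    m = max(abs(v) for v in values) or 1.0
    scale = m / 127
    q = [round(v * 127 / m) for v in values]
    return q, scale

def dequantize(q, scale):
    return [x * scale for x in q]

weights = [0.1, -0.5, 0.25, 1.0]
q, scale = quantize(weights)
print(q)                     # small integers instead of 32-bit floats
print(dequantize(q, scale))  # close to the originals, within one scale step
```

Each restored value is off by at most half a quantization step, which is why well-quantized models lose little accuracy while becoming far cheaper to store and serve.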


Inference Speed Optimization: Learn strategies to accelerate inference speeds for real-time language processing, ensuring efficient and responsive AI systems.
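The single biggest such strategy is the KV cache: without it, generating each new token reprocesses the entire prefix; with it, per-token state is computed once and reused. A toy sketch that counts "work units" instead of running a real transformer:

```python
# KV-cache sketch: a counter stands in for the transformer's
# key/value tensors, so we can compare the amount of work done.

def generate_no_cache(prompt, steps):
    work = 0
    seq = list(prompt)
    for _ in range(steps):
        work += len(seq)        # reprocess the entire prefix each step
        seq.append("tok")
    return work

def generate_with_cache(prompt, steps):
    cache = list(prompt)        # "keys/values" computed once per token
    work = len(cache)           # one prefill pass over the prompt
    for _ in range(steps):
        work += 1               # only the newest token is processed
        cache.append("tok")
    return work

print(generate_no_cache(["a"] * 8, steps=8))    # quadratic-ish work: 92
print(generate_with_cache(["a"] * 8, steps=8))  # linear work: 16
```

Production servers layer further tricks on top (batching, paged attention, speculative decoding), but all of them start from this cache.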


Responsible AI Implementation: Explore ethical AI development using guardrails like NeMo, Colang, and Llama Guard to ensure AI systems align with responsible AI principles.
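The basic shape of a guardrail is a check that runs before (or after) the model call and can short-circuit it. Real frameworks such as NeMo Guardrails (with Colang) and Llama Guard use learned classifiers and dialogue rules; this sketch substitutes a simple blocklist purely to show where the rail sits in the request path.

```python
# Input-rail sketch: a policy check sits between the user and the LLM.

BLOCKED_TOPICS = ("credit card number", "social security")

def input_rail(message):
    text = message.lower()
    if any(topic in text for topic in BLOCKED_TOPICS):
        return False, "I can't help with that request."
    return True, None

def guarded_chat(message, llm):
    allowed, refusal = input_rail(message)
    if not allowed:
        return refusal          # short-circuit: the LLM is never called
    return llm(message)

print(guarded_chat("Tell me a joke", lambda m: "Why did the GPU blush?"))
print(guarded_chat("Store my credit card number", lambda m: "..."))
```

Output rails work the same way on the model's response, so unsafe completions can be caught even when the input looked benign.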


Agentic RAG and Chunking Strategies: Implement advanced semantic chunking techniques and explore AI agent frameworks like AutoGen to enhance the capabilities of RAG systems.
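Semantic chunking can be sketched as: split into sentences, then group adjacent sentences while they stay similar, starting a new chunk at a topic shift. Word-overlap (Jaccard) stands in for real embedding similarity here, and the threshold is an illustrative choice.

```python
# Semantic chunking sketch: variable-size chunks that follow topic
# boundaries instead of a fixed character count.

def jaccard(a, b):
    wa, wb = set(a.lower().split()), set(b.lower().split())
    return len(wa & wb) / len(wa | wb)

def semantic_chunks(sentences, threshold=0.2):
    chunks = [[sentences[0]]]
    for sent in sentences[1:]:
        if jaccard(chunks[-1][-1], sent) >= threshold:
            chunks[-1].append(sent)     # same topic: extend current chunk
        else:
            chunks.append([sent])       # topic shift: start a new chunk
    return [" ".join(c) for c in chunks]

sentences = [
    "Redis stores the cache entries.",
    "The cache entries in Redis expire hourly.",
    "Pricing starts at ten dollars.",
]
for chunk in semantic_chunks(sentences):
    print(chunk)
```

The payoff for RAG is that a retrieved chunk is more likely to be a self-contained thought rather than a fragment cut mid-topic.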


DSPy and Knowledge Graphs: Learn to create and utilize knowledge graphs effectively, mastering DSPy as an alternative prompting approach for structured data handling and enhanced AI interaction.
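On the knowledge-graph side, the underlying representation is just (subject, relation, object) triples queried by pattern matching. A toy sketch (real systems use RDF stores or graph databases, and DSPy adds the structured-prompting layer on top):

```python
# Knowledge-graph sketch: facts as triples, queried with wildcards,
# much like variables in a SPARQL pattern.

TRIPLES = [
    ("Paris", "capital_of", "France"),
    ("Berlin", "capital_of", "Germany"),
    ("France", "part_of", "Europe"),
]

def query(subject=None, relation=None, obj=None):
    # None acts as a wildcard matching anything in that position.
    return [
        t for t in TRIPLES
        if (subject is None or t[0] == subject)
        and (relation is None or t[1] == relation)
        and (obj is None or t[2] == obj)
    ]

print(query(relation="capital_of", obj="France"))  # what is France's capital?
print(query(subject="France"))                     # everything known about France
```

Feeding matched triples into a prompt grounds the model's answer in explicit, inspectable facts rather than free-text retrieval alone.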


Throughout the course, we will analyze state-of-the-art AI products, reverse-engineering some through Python. As a bonus, you'll have access to experimental products being developed at Traversaal.ai, my startup, allowing you to stay at the forefront of advancements in the field.


Prerequisites: You should have hands-on experience building RAG solutions, an understanding of encoders and decoders, and some knowledge of cloud solutions and APIs.


If you feel the need for a more foundational course, consider checking out my other offering on LLMs: Building LLM Applications (https://maven.com/boring-bot/ml-system-design).


This course is for you if you are a:

01

Machine Learning Engineer exploring different techniques to scale LLM solutions

02

Researcher who would like to delve into various aspects of open-source LLMs

03

Software Engineer looking to learn how to integrate AI into their products

What you’ll get out of this course

Advanced AI Architectures

Understand and implement complex AI architectures, including enterprise-level RAG systems and agentic RAG strategies. You will also dive deep into the Mixture of Experts (MoE) technique and other model merging strategies to enhance the capabilities of your AI systems.

Practical Skills for Deployment

From building semantic caches using GCP and Redis to deploying LLMs on serverless platforms like Amazon Bedrock, you'll learn the practical skills to deploy and manage AI applications in real-world scenarios.

Fine-Tuning Expertise

Acquire advanced techniques for fine-tuning LLMs, enabling you to adapt these models to specific tasks or domains and enhance their performance in targeted applications.

Efficient Inference Processing

Explore strategies for optimizing inference speeds, ensuring that your language models perform efficiently in real-time scenarios, a crucial skill for deploying responsive and scalable applications.

Knowledge of Responsible AI

Understand the importance of ethical AI development and learn to implement guardrails using tools like NeMo, Colang, and Llama Guard to ensure your AI systems align with responsible AI principles.




This course includes

12 interactive live sessions

Lifetime access to course materials

In-depth lessons

Direct access to instructor

Projects to apply learnings

Guided feedback & reflection

Private community of peers

Course certificate upon completion

Maven Satisfaction Guarantee

This course is backed by Maven’s guarantee. You can receive a full refund within 14 days after the course ends, provided you meet the completion criteria in our refund policy.

Course syllabus

Week 1

Oct 5—Oct 6

    Session 1: Enterprise RAG and Multi Agents

    Sat 10/5, 4:00 PM—6:00 PM (UTC)

    Recordings from previous talks/sessions

    2 items

    Enterprise RAG Solutions with Semantic Caching

    8 items

Week 2

Oct 7—Oct 13

    Office Hours

    Tue 10/8, 7:00 PM—7:30 PM (UTC)
    Optional

    Session 2

    Sat 10/12, 4:00 PM—6:00 PM (UTC)

    Optimizing and Deploying Large Language Models

    8 items

Week 3

Oct 14—Oct 20

    Office Hours

    Thu 10/17, 7:00 PM—7:30 PM (UTC)
    Optional

    Session 3

    Sat 10/19, 4:00 PM—6:00 PM (UTC)

    Quantization, API Production and Guardrails

    3 items

Week 4

Oct 21—Oct 27

    Office Hours

    Wed 10/23, 7:00 PM—7:30 PM (UTC)
    Optional

Week 5

Oct 28—Nov 3

    Session 4

    Sat 11/2, 4:00 PM—6:00 PM (UTC)

    DSPy and Knowledge Graphs

    4 items

Week 6

Nov 4—Nov 9

    Office Hours

    Wed 11/6, 3:30 PM—4:30 PM (UTC)

    Session 5

    Sat 11/9, 4:00 PM—6:00 PM (UTC)

    Semantic and Agentic RAG

    2 items

Post-course

    Demo Day

    Sat 11/30, 5:00 PM—7:00 PM (UTC)

    Office Hours

    Thu 11/14, 6:00 PM—6:30 PM (UTC)

    Last Class

    Sat 11/16, 5:00 PM—7:00 PM (UTC)

    Autogen and Agents

    2 items

    Model Merging and Fine-tuning Video recordings

    2 items

4.9 (22 ratings)

What students are saying

Meet your instructor

Hamza Farooq

I am the founder of Traversaal.ai, an LLM-based startup dedicated to creating scalable, customizable, and cost-efficient language model solutions for enterprises.


With over 15 years of experience in machine learning, my journey has spanned three continents and seven countries, covering a diverse range of industries such as tech, telecommunications, finance, and retail.


As a former Senior Research Manager at Google and Walmart Labs, I have led data science and machine learning teams, focusing on optimization, natural language processing, recommender systems, and time series forecasting.

I am also an adjunct professor at Stanford and UCLA, where I bridge the gap between academic theory and real-world AI applications.


Additionally, I frequently speak at conferences and conduct training sessions, sharing insights on large language models, deep learning, and cloud computing.


Be the first to know about upcoming cohorts

Enterprise RAG and Multi-Agent Applications

Get reimbursed

Bulk purchases

Course schedule

4-6 hours per week

  • Sundays

    9:00 - 11:00am PT

    Virtual Class

  • Weekly projects

    2-3 hours per week

Work in teams to build solutions; this requires engagement with other team members

Learning is better with cohorts

Active hands-on learning

This course builds on live workshops and hands-on projects

Interactive and project-based

You’ll be interacting with other learners through breakout rooms and project teams

Learn with a cohort of peers

Join a community of like-minded people who want to learn and grow alongside you

Frequently Asked Questions

What happens if I can’t make a live session?

I work full-time, what is the expected time commitment?

What’s the refund policy?
