RAG and LLM Fine-Tuning

Hosted by Dan Becker and Hamel Husain


What you'll learn

Learn when to use each approach

See examples of when to use RAG, when to fine-tune, and when to use both.

Integrate Fine-Tuning with RAG

How to fine-tune in ways that take advantage of RAG, optimizing for style and factual consistency.

Why this topic matters

RAG and fine-tuning both let you customize LLMs for a specific problem, but they serve different purposes. Each is effective when used correctly, but we've also seen them fail when people choose the wrong tool for the job. This short interactive workshop covers each approach and when to use it.

You'll learn from

Dan Becker

Chief Generative AI Architect @ Straive

Dan has worked in AI since 2011, when he finished 2nd (out of 1350+ teams) in a Kaggle competition with a $500k prize. He contributed code to TensorFlow as a data scientist at Google, and he has taught online deep learning courses to over 250k people. Dan has advised AI projects for 6 companies in the Fortune 100.

Hamel Husain

Founder @ Parlance Labs

Hamel is an ML engineer who loves building machine learning infrastructure and tools 👷🏼‍♂️. He leads or contributes to many popular open-source machine learning projects. His extensive experience (20+ years) as a machine learning engineer spans various industries, including large tech companies like Airbnb and GitHub.

Hamel is an independent consultant helping companies operationalize LLMs. At GitHub, Hamel led CodeSearchNet, a large language model for semantic search that was a precursor to Copilot, a large language model used by millions of developers.

Previously at Google and Microsoft



© 2024 Maven Learning, Inc.