RAG and LLM Fine-Tuning
Hosted by Dan Becker and Hamel Husain
Go deeper with a course
What you'll learn
Learn when to use each approach
Integrate Fine-Tuning with RAG
Why this topic matters
You'll learn from
Dan Becker
Chief Generative AI Architect @ Straive
Dan has worked in AI since 2011, when he finished 2nd (out of 1350+ teams) in a Kaggle competition with a $500k prize. He contributed code to TensorFlow as a data scientist at Google, and he has taught online deep learning courses to over 250k people. Dan has advised AI projects for 6 companies in the Fortune 100.
Hamel Husain
Founder @ Parlance Labs
Hamel is an ML engineer who loves building machine learning infrastructure and tools 👷🏼‍♂️. He leads or contributes to many popular open-source machine learning projects. His extensive experience (20+ years) as a machine learning engineer spans various industries, including large tech companies like Airbnb and GitHub.
Hamel is an independent consultant helping companies operationalize LLMs. At GitHub, Hamel led CodeSearchNet, a large language model for semantic search that was a precursor to Copilot, a large language model used by millions of developers.
Previously at Google and Microsoft