Mastering LLMs For Developers & Data Scientists

4.4 (176)

·

3 Weeks

·

Cohort-based Course

An online course for everything LLMs.

Course overview

Build skills to be effective with LLMs

This started as an LLM fine-tuning course. It organically grew into a learning event with world-class speakers on a broad range of LLM topics. The original fine-tuning course is still here as a series of workshops. But there are now many self-contained talks and office hours from experts on many Generative AI topics.


All materials + recordings will be available to participants who enroll. There are 11 talks and 4 workshops (and growing) in addition to office hours.


Conference Talks

------------------------------

Jeremy Howard: Co-Founder Answer.AI & Fast.AI

- Build Applications For LLMs in Python

Sophia Yang: Head of Developer Relations, Mistral AI

- Best Practices For Fine Tuning Mistral

Simon Willison: Creator of Datasette, co-creator of Django, PSF Board Member

- Language models on the command-line

JJ Allaire: CEO, Posit (formerly RStudio) & Researcher for the UK AI Safety Institute

- Inspect, an OSS framework for LLM evals

Wing Lian: Creator of Axolotl library for LLM fine-tuning

- Fine-Tuning w/Axolotl

Mark Saroufim and Jane Xu: PyTorch developers @ Meta

- Slaying OOMs with PyTorch FSDP and torchao

Jason Liu: Creator of Instructor

- Systematically improving RAG applications 

Paige Bailey: DevRel Lead, GenAI, Google

- When to Fine-Tune?

Emmanuel Ameisen: Research Engineer, Anthropic

- Why Fine-Tuning is Dead

Hailey Schoelkopf: Research Scientist at EleutherAI, maintainer of LM Evaluation Harness

- A Deep Dive on LLM Evaluation

Johno Whitaker: R&D at AnswerAI

- Fine-Tuning Napkin Math

John Berryman: Author of O'Reilly Book Prompt Engineering for LLMs

- Prompt Eng Best Practices

Ben Clavié: R&D at AnswerAI

- Beyond the Basics of RAG

Abhishek Thakur: Leads AutoTrain at Hugging Face

- Train (almost) any LLM using 🤗 AutoTrain

Kyle Corbitt: Currently building OpenPipe

- From prompt to model: fine-tuning when you've already deployed LLMs in prod

Ankur Goyal: CEO and Founder at Braintrust

- LLM Eval For Text2SQL

Freddy Boulton: Software Engineer at 🤗

- Let's Go, Gradio!

Jo Bergum: Distinguished Engineer at Vespa

- Back to basics for RAG



Fine-Tuning Course

---------------------------

Run an end-to-end LLM fine-tuning project with modern tools and best practices. Four workshops guide you through productionizing LLMs, including evals, fine-tuning, and serving.


Workshop 1: Determine when (and when not) to fine-tune an LLM

Workshop 2: Train your first fine-tuned LLM with Axolotl

Workshop 3: Set up instrumentation and evaluation to incrementally improve your model

Workshop 4: Deploy Your Model
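To give a flavor of Workshop 2: Axolotl drives a fine-tuning run from a single YAML config file. Below is a minimal sketch. The field names follow Axolotl's config format, but the base model, dataset path, and hyperparameter values are illustrative assumptions, not course materials:

```yaml
# Illustrative Axolotl config sketch for a QLoRA fine-tune
base_model: NousResearch/Meta-Llama-3-8B   # example model, swap for your own
load_in_4bit: true                         # 4-bit quantization for QLoRA
adapter: qlora

datasets:
  - path: ./data/train.jsonl               # illustrative dataset path
    type: alpaca                           # prompt format of the dataset

lora_r: 16
lora_alpha: 32
lora_dropout: 0.05

micro_batch_size: 2
gradient_accumulation_steps: 4
num_epochs: 3
learning_rate: 0.0002
output_dir: ./outputs
```

With a config like this in hand, training is typically launched with `accelerate launch -m axolotl.cli.train config.yml`; the workshops walk through the details.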


This is accompanied by 5+ hours of office hours. Lectures explain the why and demonstrate the how for all the key pieces of LLM fine-tuning. Your hands-on experience in the course project will ensure you're ready to apply your new skills in real business scenarios.


The Fine-Tuning course has these guest speakers:


- Shreya Shankar: LLMOps and LLM Evaluations researcher

- Zach Mueller: Lead maintainer of HuggingFace accelerate

- Bryan Bischof: Director of AI Engineering at Hex

- Charles Frye: AI Engineer at Modal Labs

- Eugene Yan: Senior Applied Scientist @ Amazon

- Harrison Chase: CEO of LangChain

- Travis Addair: Co-Founder & CTO of Predibase

- Joe Hoover: Lead ML Engineer at Replicate

FAQ:

-------


Q: It says this course already started. Should I still enroll?

A: Yes. Everything is recorded, so you can watch videos for any events that have happened so far, join for live events moving forward, and even learn from talks long after the conference is over.


Q: Will there be a future cohort?

A: No. We were fortunate to have so many world-class speakers. We don't think this can be replicated, so it is now a one-time-only event with all recordings available.


Q: Are you still giving out free compute credits?

A: No. Students who enrolled after 5/29/2024 are not eligible for compute credits. You will still get access to the lectures and recordings. EXCEPTION: if you enroll in the course by 6/10/2024 and use Modal by 6/11/2024, they will give you $1,000 in compute credits.

Who Is It For?

01

Data scientists looking to repurpose skills from conventional ML into LLMs and generative AI

02

Software engineers with Python experience looking to add the newest and most important tools in tech to their toolkit

03

Programmers who have called LLM APIs and now want to take their skills to the next level by building and deploying fine-tuned LLMs

What you’ll get out of this conference

Connect With A Large Community Of AI Practitioners

Discord with 1000+ members attending the conference.

Learn more about LLMs

Topics such as RAG, evals, inference, and fine-tuning are covered.

Learn about the best tools

We have curated the tools that we like the most. Credits for many of these tools are provided.

Learn about fine-tuning in-depth

This conference began as an LLM fine-tuning course. That course is still included and takes place across 4 workshops.

This course includes

4 interactive live sessions

Lifetime access to course materials

13 in-depth lessons

Direct access to instructor

Projects to apply learnings

Guided feedback & reflection

Private community of peers

Course certificate upon completion

Maven Satisfaction Guarantee

This course is backed by Maven’s guarantee. You can receive a full refund within 14 days after the course ends, provided you meet the completion criteria in our refund policy.

Course syllabus

Week 1

Aug 6—Aug 11

    Aug

    6

    Workshop 1: When and Why to Fine-Tune an LLM

    Tue 8/6, 5:00 PM—7:00 PM (UTC)

    When and Why to Fine-Tune an LLM

    3 items

Week 2

Aug 12—Aug 18

    Aug

    13

    Workshop 2: Fine-Tuning with Axolotl (guest speakers Wing Lian, Zach Mueller)

    Tue 8/13, 5:00 PM—7:00 PM (UTC)

    Fine-Tuning with Axolotl

    4 items

Week 3

Aug 19—Aug 25

    Aug

    20

    Workshop 3: Instrumenting & Evaluating LLMs (guest speakers Bryan Bischof, Shreya Shankar, Eugene Yan)

    Tue 8/20, 5:00 PM—7:00 PM (UTC)

    Instrumenting and Evaluating LLMs for Incremental Improvement

    3 items

Week 4

Aug 26—Aug 27

    Deploying Your Fine-Tuned Model

    3 items

Post-course

    Aug

    27

    Workshop 4: Deploying Fine-Tuned Models (guest speakers Travis Addair, Joe Hoover, Charles Frye)

    Tue 8/27, 5:00 PM—7:00 PM (UTC)


Meet your instructors / conference organizers

Dan Becker

Chief Generative AI Architect @ Straive

Dan has worked in AI since 2011, when he finished 2nd (out of 1350+ teams) in a Kaggle competition with a $500k prize. He contributed code to TensorFlow as a data scientist at Google and he has taught online deep learning courses to over 250k people. Dan has advised AI projects for 6 companies in the Fortune 100.

Hamel Husain

Founder @ Parlance Labs

Hamel is an ML engineer who loves building machine learning infrastructure and tools 👷🏼‍♂️. He leads or contributes to many popular open-source machine learning projects. His extensive experience (20+ years) as a machine learning engineer spans various industries, including large tech companies like Airbnb and GitHub.

Hamel is an independent consultant helping companies operationalize LLMs. At GitHub, Hamel led CodeSearchNet, a large language model for semantic search that was a precursor to GitHub Copilot, which is now used by millions of developers.


Course schedule

4-6 hours per week

  • Tuesdays

    1:00pm - 3:00pm EST

    Interactive weekly workshops where you will learn the tools you will apply in your course project.

  • Weekly projects

    2 hours per week

    You will build and deploy an LLM as part of the course project. The course project is divided into four weekly projects.


    By the end, you will not only know about fine-tuning, but you will have hands-on experience doing it.
