Mastering LLMs: A Conference For Developers & Data Scientists

4.5 (154 ratings) · 6 Weeks · Cohort-based Course

An online conference for everything LLMs.


Course overview

Build skills to be effective with LLMs

Note: Course registration originally included compute credits on several platforms. Registration still includes access to all videos, materials, and the Discord community, but it no longer includes compute credits.


This started as an LLM fine-tuning course and organically grew into a conference with world-class speakers on a broad range of LLM topics. The original fine-tuning course is still here as a series of workshops, but there are now also many self-contained talks and office hours from experts across generative AI.


All materials and recordings will be available to participants who enroll. There are 18 talks and 4 workshops (and growing) in addition to office hours.


Conference Talks

------------------------------

Jeremy Howard: Co-Founder of Answer.AI & fast.ai

- Build Applications For LLMs in Python

Sophia Yang: Head of Developer Relations, Mistral AI

- Best Practices For Fine-Tuning Mistral

Simon Willison: Creator of Datasette, co-creator of Django, PSF Board Member

- Language models on the command-line

JJ Allaire: CEO, Posit (formerly RStudio) & Researcher for the UK AI Safety Institute

- Inspect, An OSS framework for LLM evals

Wing Lian: Creator of Axolotl library for LLM fine-tuning

- Fine-Tuning w/ Axolotl

Mark Saroufim and Jane Xu: PyTorch developers @ Meta

- Slaying OOMs with PyTorch FSDP and torchao

Jason Liu: Creator of Instructor

- Systematically improving RAG applications 

Paige Bailey: DevRel Lead, GenAI, Google

- When to Fine-Tune?

Emmanuel Ameisen: Research Engineer, Anthropic

- Why Fine-Tuning is Dead

Hailey Schoelkopf: Research Scientist at EleutherAI, maintainer of LM Evaluation Harness

- A Deep Dive on LLM Evaluation

Johno Whitaker: R&D at Answer.AI

- Fine-Tuning Napkin Math

John Berryman: Author of the O'Reilly book Prompt Engineering for LLMs

- Prompt Engineering Best Practices

Ben Clavié: R&D at Answer.AI

- Beyond the Basics of RAG

Abhishek Thakur: Leads AutoTrain at Hugging Face

- Train (almost) any LLM using 🤗 AutoTrain

Kyle Corbitt: Currently building OpenPipe

- From prompt to model: fine-tuning when you've already deployed LLMs in prod

Ankur Goyal: CEO and Founder at Braintrust

- LLM Eval For Text2SQL

Freddy Boulton: Software Engineer at 🤗

- Let's Go, Gradio!

Jo Bergum: Distinguished Engineer at Vespa

- Back to basics for RAG



Fine-Tuning Course

---------------------------

Run an end-to-end LLM fine-tuning project with modern tools and best practices. Four workshops guide you through productionizing LLMs, including evals, fine-tuning, and serving.


Workshop 1: Determine when (and when not) to fine-tune an LLM

Workshop 2: Train your first fine-tuned LLM with Axolotl (a minimal code sketch follows this list)

Workshop 3: Set up instrumentation and evaluation to incrementally improve your model

Workshop 4: Deploy Your Model
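
To give a flavor of what the workshops cover, here is a minimal illustrative sketch (not course material) of attaching LoRA adapters with Hugging Face transformers and peft. Axolotl drives this kind of setup from a higher-level YAML config; the model name below is a stand-in chosen only because it is small.

```python
# Illustrative sketch only -- the course uses Axolotl, which wraps a setup
# like this in a YAML config. Assumes `transformers` and `peft` are installed.
from transformers import AutoModelForCausalLM
from peft import LoraConfig, get_peft_model

# Stand-in base model, chosen only for its small size.
model = AutoModelForCausalLM.from_pretrained("TinyLlama/TinyLlama-1.1B-Chat-v1.0")

lora = LoraConfig(
    r=16,                                 # adapter rank: lower = fewer trainable params
    lora_alpha=32,                        # scaling applied to the adapter updates
    target_modules=["q_proj", "v_proj"],  # attach adapters to the attention projections
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora)

# Only the small adapter matrices receive gradients; the base weights stay
# frozen, which is what makes fine-tuning feasible on modest hardware.
model.print_trainable_parameters()
```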


This is accompanied by 5+ hours of office hours. Lectures explain the why and demonstrate the how for all the key pieces of LLM fine-tuning, and your hands-on experience in the course project will ensure you're ready to apply your new skills in real business scenarios.
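
As a taste of the instrumentation and evaluation covered in Workshop 3, below is a minimal hypothetical sketch of an eval loop: run the model over a handful of test cases, log pass/fail, and track a score over time. The test cases and the trivial generate() stub are placeholders, not course code.

```python
# Hypothetical sketch of a tiny eval harness. The cases and the generate()
# stub are placeholders for your own data and your fine-tuned model.
cases = [
    {"prompt": "What is 2 + 2?", "expected": "4"},
    {"prompt": "What is the capital of France?", "expected": "Paris"},
]

def generate(prompt: str) -> str:
    # Placeholder: in practice this would call your fine-tuned model.
    return "4" if "2 + 2" in prompt else "Paris"

def run_eval() -> float:
    passed = 0
    for case in cases:
        output = generate(case["prompt"])
        ok = case["expected"].lower() in output.lower()  # simple containment check
        print(f"{'PASS' if ok else 'FAIL'} | {case['prompt']} -> {output}")
        passed += ok
    return passed / len(cases)

print(f"score: {run_eval():.0%}")
```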


The Fine-Tuning course has these guest speakers:


- Shreya Shankar: LLMOps and LLM Evaluations researcher

- Zach Mueller: Lead maintainer of Hugging Face accelerate

- Bryan Bischof: Director of AI Engineering at Hex

- Charles Frye: AI Engineer at Modal Labs

- Eugene Yan: Senior Applied Scientist @ Amazon

- Harrison Chase: CEO of LangChain

- Travis Addair: Co-Founder & CTO of Predibase

- Joe Hoover: Lead ML Engineer at Replicate

FAQ:

-------


Q: It says this course already started. Should I still enroll?

A: Yes. Everything is recorded, so you can watch videos for any events that have happened so far, join for live events moving forward, and even learn from talks long after the conference is over.


Q: Will there be a future cohort?

A: No. We were fortunate to have so many world-class speakers. We don't think this can be replicated, so it is now a one-time-only event with all recordings available.


Q: Are you still giving out free compute credits?

A: No. Students who enrolled after 5/29/2024 are not eligible for compute credits. You will still get access to the lectures and recordings. EXCEPTION: if you enroll in the course by 6/10/2024 and use Modal by 6/11/2024, they will give you $1,000 in compute credits.

Who Is It For?

1. Data scientists looking to repurpose skills from conventional ML into LLMs and generative AI

2. Software engineers with Python experience who want to add the most important new tools in tech to their skill set

3. Programmers who have called LLM APIs and now want to take their skills to the next level by building and deploying fine-tuned LLMs

What you’ll get out of this conference

Connect With A Large Community Of AI Practitioners

A Discord community with 1,000+ members attending the conference.

Learn more about LLMs

Topics covered include RAG, evals, inference, and fine-tuning.

Learn about the best tools

We have curated the tools we like the most. (Note: compute credits for these tools are no longer included; see the FAQ.)

Learn about fine-tuning in-depth

This conference grew out of an LLM fine-tuning course. That course is still here and runs as a series of four workshops.

This course includes

34 interactive live sessions

Lifetime access to course materials

13 in-depth lessons

Direct access to instructor

Projects to apply learnings

Guided feedback & reflection

Private community of peers

Course certificate upon completion

Maven Satisfaction Guarantee

This course is backed by Maven’s guarantee. You can receive a full refund within 14 days after the course ends, provided you meet the completion criteria in our refund policy.

Course syllabus

  • Week 1

    May 14—May 19

    Week dates are set to instructor's time zone

    Events

    • Fine-Tuning Workshop 1: When and Why to Fine-Tune an LLM

      Tue, May 14, 5:00 PM - 7:00 PM UTC

    Modules

    • When and Why to Fine-Tune an LLM

  • Week 2

    May 20—May 26

    Events

    • Fine-Tuning Workshop 2: Fine-Tuning with Axolotl (guest speakers Wing Lian, Zach Mueller)

      Tue, May 21, 5:00 PM - 7:00 PM UTC

    • Conference Talk: From prompt to model: fine-tuning when you've already deployed LLMs in prod (w/ Kyle Corbitt)

      Thu, May 23, 11:00 PM - 12:00 AM UTC

    • Office Hours: Axolotl w/ Wing Lian

      Fri, May 24, 5:00 PM - 6:00 PM UTC

    • Office Hours: FSDP, DeepSpeed and Accelerate w/ Zach Mueller

      Fri, May 24, 6:30 PM - 7:30 PM UTC

    Modules

    • Fine-Tuning with Axolotl

  • Week 3

    May 27—Jun 2

    Events

    • Office Hours: Gradio w/ Freddy Boulton

      Mon, May 27, 11:00 PM - 12:00 AM UTC

    • Fine-Tuning Workshop 3: Instrumenting & Evaluating LLMs (guest speakers Harrison Chase, Bryan Bischof, Shreya Shankar, Eugene Yan)

      Tue, May 28, 5:00 PM - 7:00 PM UTC

    • Conference Talk: LLM Eval For Text2SQL w/ Ankur Goyal

      Wed, May 29, 4:00 PM - 5:00 PM UTC

    • Conference Talk: Prompt Engineering Workshop w/ John Berryman

      Wed, May 29, 5:00 PM - 6:00 PM UTC

    • Conference Talk: Inspect, An OSS framework for LLM evals w/ JJ Allaire

      Wed, May 29, 8:00 PM - 9:00 PM UTC

    • Office Hours: Modal w/ Charles Frye

      Thu, May 30, 5:30 PM - 6:30 PM UTC

    • Office Hours: LangChain/LangSmith

      Thu, May 30, 8:00 PM - 8:45 PM UTC

    • Conference Talk: Napkin Math For Fine-Tuning w/ Johno Whitaker

      Fri, May 31, 4:00 PM - 5:00 PM UTC

    • Conference Talk: Train (almost) any LLM using 🤗 AutoTrain

      Fri, May 31, 5:00 PM - 6:00 PM UTC

    • Optional: Johno Whitaker round 2

      Fri, May 31, 6:00 PM - 7:00 PM UTC

    Modules

    • Instrumenting and Evaluating LLMs for Incremental Improvement

  • Week 4

    Jun 3—Jun 9

    Events

    • Fine-Tuning Workshop 4: Deploying Fine-Tuned Models (guest speakers Travis Addair, Charles Frye, Joe Hoover)

      Tue, Jun 4, 5:00 PM - 7:00 PM UTC

    • Conference Talk: Best Practices For Fine-Tuning Mistral w/ Sophia Yang

      Wed, Jun 5, 4:30 PM - 5:00 PM UTC

    • Conference Talk: Creating, curating, and cleaning data for LLMs w/ Daniel van Strien

      Wed, Jun 5, 5:00 PM - 6:00 PM UTC

    • Conference Talk: Why Fine-Tuning is Dead w/ Emmanuel Ameisen

      Wed, Jun 5, 11:00 PM - 11:45 PM UTC

    • Conference Talk: Systematically improving RAG applications w/ Jason Liu

      Thu, Jun 6, 6:00 PM - 6:30 PM UTC

    • Conference Talk: Build Applications For LLMs in Python w/ Jeremy Howard & Johno Whitaker

      Thu, Jun 6, 10:00 PM - 11:00 PM UTC

    • Optional: Getting the most out of your LLM experiments w/ Thomas Capelle

      Fri, Jun 7, 5:00 PM - 5:45 PM UTC

    Modules

    • Deploying Your Fine-Tuned Model

  • Week 5

    Jun 10—Jun 16

    Events

    • Conference Talk: Slaying OOMs with PyTorch FSDP and torchao (w/ Mark Saroufim and Jane Xu)

      Mon, Jun 10, 9:00 PM - 10:00 PM UTC

    • Conference Talk: When to Fine-Tune? (w/ Paige Bailey)

      Mon, Jun 10, 11:00 PM - 12:00 AM UTC

    • Conference Talk: Beyond the basics of Retrieval for Augmenting Generation (w/ Ben Clavié)

      Tue, Jun 11, 12:00 AM - 12:30 AM UTC

    • Conference Talk: Modal: Simple Scalable Serverless Services (w/ Charles Frye)

      Tue, Jun 11, 4:30 PM - 5:15 PM UTC

    • Optional: Replicate Office Hours

      Tue, Jun 11, 5:15 PM - 5:45 PM UTC

    • Conference Talk: A Deep Dive on LLM Evaluation (w/ Hailey Schoelkopf)

      Tue, Jun 11, 9:00 PM - 9:45 PM UTC

    • Conference Talk: Language models on the command-line w/ Simon Willison

      Wed, Jun 12, 12:00 AM - 1:00 AM UTC

    • Office Hours: Predibase w/ Travis Addair

      Wed, Jun 12, 5:00 PM - 6:00 PM UTC

    • Conference Talk: Fine-Tuning OpenAI Models - Best Practices w/ Steven Heidel

      Wed, Jun 12, 8:30 PM - 9:30 PM UTC

    • Optional: Fine-Tuning LLMs for Function Calling

      Wed, Jun 12, 9:30 PM - 10:00 PM UTC

  • Week 6

    Jun 17—Jun 20

    Events

    • Conference Talk: Back to Basics for RAG w/ Jo Bergum

      Tue, Jun 18, 8:00 PM - 8:45 PM UTC

    • Optional: LiveStream - Lessons From A Year of Building w/ LLMs

      Thu, Jun 20, 11:00 PM - 2:00 AM UTC


Meet your instructors / conference organizers

Dan Becker

Chief Generative AI Architect @ Straive

Dan has worked in AI since 2011, when he finished 2nd (out of 1,350+ teams) in a Kaggle competition with a $500k prize. He contributed code to TensorFlow as a data scientist at Google and has taught online deep learning courses to over 250k people. Dan has advised AI projects for 6 companies in the Fortune 100.

Hamel Husain

Founder @ Parlance Labs

Hamel is an ML engineer who loves building machine learning infrastructure and tools 👷🏼‍♂️. He leads or contributes to many popular open-source machine learning projects. His extensive experience (20+ years) as a machine learning engineer spans various industries, including large tech companies like Airbnb and GitHub.

Hamel is an independent consultant helping companies operationalize LLMs. At GitHub, Hamel led CodeSearchNet, a large language model for semantic search that was a precursor to GitHub Copilot, which is now used by millions of developers.

Join an upcoming cohort

Mastering LLMs: A Conference For Developers & Data Scientists

Cohort 1

$500 USD

Dates

May 14—June 21, 2024

Payment Deadline

Oct 5, 2024


Course schedule

4-6 hours per week
  • Tuesdays

    1:00pm - 3:00pm Eastern Time

    Interactive weekly workshops where you learn the tools you will apply in your course project.

  • Weekly projects

    2 hours per week

    You will build and deploy an LLM as part of the course project. The course project is divided into four weekly projects.


    By the end, you will not only know about fine-tuning, but you will have hands-on experience doing it.

