LLM Fine-Tuning for Data Scientists and Software Engineers

New · 4 Weeks · Cohort-based Course

Train, validate and deploy your first fine-tuned LLM

Previously at Google, Microsoft, and GitHub


Course overview

Build skills now so you can focus your work on LLMs and stay ahead of the field

Most software engineers and data scientists talk about LLMs, but few have the hands-on knowledge to train, validate and deploy fine-tuned LLMs for specific problems.


This course takes your skills (and career) to the next level as you run an end-to-end LLM fine-tuning project with the latest tools and best practices.


Four workshops guide you through each step of that project:


Workshop 1: Determine when (and when not) to fine-tune an LLM

Workshop 2: Train your first fine-tuned LLM with Axolotl

Workshop 3: Set up instrumentation and evaluation to incrementally improve your model

Workshop 4: Deploy Your Model


Lectures explain the why and demonstrate the how for all the key pieces of LLM fine-tuning. Your hands-on experience in the course project ensures you're ready to apply your new skills in real business scenarios.

Who Is It For?

01

Data scientists looking to repurpose skills from conventional ML into LLMs and generative AI

02

Software engineers with Python experience looking to add the newest and most important tools in tech

03

Programmers who have called LLM APIs and now want to take their skills to the next level by building and deploying fine-tuned LLMs

What you’ll get out of this course

Determine when (and when not) to fine-tune an LLM

Understand the costs and benefits of fine-tuning a model and how they vary from one problem to the next.


We'll discuss these judgment calls as they apply to specific use cases.

Build your first fine-tuned LLM with Axolotl

Axolotl builds in best practices for faster, more reliable fine-tuning.


Gain experience using Axolotl with guidance from an Axolotl contributor.
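Axolotl is driven by a YAML config file rather than training code. The sketch below writes a minimal LoRA-style config; the key names follow common Axolotl examples, but exact fields vary by version and base model, so treat it as illustrative rather than canonical.

```python
# Sketch of a minimal Axolotl-style config for a QLoRA fine-tune.
# Key names follow common Axolotl examples; exact fields vary by
# version and model, so treat this as illustrative, not canonical.
config = """\
base_model: NousResearch/Llama-2-7b-hf
load_in_4bit: true
adapter: qlora
lora_r: 16
lora_alpha: 32
lora_dropout: 0.05
datasets:
  - path: my_dataset.jsonl   # hypothetical local dataset
    type: alpaca
sequence_len: 2048
micro_batch_size: 2
num_epochs: 3
output_dir: ./outputs
"""

with open("qlora.yml", "w") as f:
    f.write(config)

# Training is then typically launched from the CLI, e.g.:
#   accelerate launch -m axolotl.cli.train qlora.yml
print("wrote qlora.yml")
```

Keeping the run defined in one config file is what makes Axolotl experiments easy to version, diff, and reproduce.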

Set up instrumentation and evaluation to incrementally improve your model

Learn the methods for evaluating ML models. Work through where each method is applicable, and plan how to build data collection into real-world processes.
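As a taste of what an automated evaluation loop looks like, here is a minimal sketch that scores model outputs against references with a simple string metric. Real projects layer on LLM-as-judge and human review; the function and field names here are illustrative, not from the course materials.

```python
# Minimal sketch of an automated eval loop: score predictions against
# references with a simple metric. Names are illustrative.

def exact_match(prediction: str, reference: str) -> bool:
    """Case- and whitespace-insensitive exact match."""
    return prediction.strip().lower() == reference.strip().lower()

def evaluate(examples: list[dict]) -> dict:
    """examples: [{"prediction": ..., "reference": ...}, ...]"""
    hits = sum(exact_match(e["prediction"], e["reference"]) for e in examples)
    return {"n": len(examples), "exact_match": hits / len(examples)}

examples = [
    {"prediction": "Paris",  "reference": "paris"},
    {"prediction": "Berlin", "reference": "Munich"},
]
print(evaluate(examples))  # {'n': 2, 'exact_match': 0.5}
```

Even a crude metric like this, run on every model revision, turns fine-tuning from guesswork into incremental improvement.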

Deploy Your Model

Compare the key criteria for successful model deployment and determine which platforms will meet your needs. Then deploy your own fine-tuned LLM from the course project.
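One recurring deployment criterion is capacity: how many GPU replicas a given request load requires. The back-of-envelope sketch below shows the arithmetic; all numbers are hypothetical placeholders, and you should measure your own model's throughput before choosing a platform.

```python
# Back-of-envelope sizing for LLM serving. All numbers below are
# hypothetical placeholders; benchmark your own model and hardware.
import math

def replicas_needed(requests_per_s: float,
                    tokens_per_request: float,
                    tokens_per_s_per_gpu: float) -> int:
    """Minimum GPU replicas to sustain the given generation load."""
    demand = requests_per_s * tokens_per_request   # tokens/s required
    return math.ceil(demand / tokens_per_s_per_gpu)

# e.g. 5 req/s, 400 generated tokens each, 700 tok/s per GPU replica
print(replicas_needed(5, 400, 700))  # -> 3
```

Estimates like this make it concrete why criteria such as throughput, latency, and cost per token drive the choice of serving platform.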

This course includes

4 interactive live sessions

Lifetime access to course materials

13 in-depth lessons

Direct access to instructor

Projects to apply learnings

Guided feedback & reflection

Private community of peers

Course certificate upon completion

Maven Satisfaction Guarantee

This course is backed by Maven’s guarantee. You can receive a full refund within 14 days after the course ends, provided you meet the completion criteria in our refund policy.

Course syllabus

  • Week 1

    May 14—May 19

    Week dates are set to instructor's time zone

    Events

    • May

      14

      Workshop 1: When and Why to Fine-Tune an LLM

      Tue, May 14, 5:00 PM - 7:00 PM UTC

    Modules

    • When and Why to Fine-Tune an LLM

  • Week 2

    May 20—May 26


    Events

    • May

      21

      Workshop 2: Fine-Tuning with Axolotl (guest speaker Wing Lian)

      Tue, May 21, 5:00 PM - 7:00 PM UTC

    Modules

    • Fine-Tuning with Axolotl

  • Week 3

    May 27—Jun 2


    Events

    • May

      28

Workshop 3: Instrumenting & Evaluating LLMs (guest speakers Bryan Bischof and Shreya Shankar)

      Tue, May 28, 5:00 PM - 7:00 PM UTC

    Modules

• Instrumenting and Evaluating LLMs for Incremental Improvement

  • Week 4

    Jun 3—Jun 4


    Events

    • Jun

      4

      Workshop 4: Deploying Fine-Tuned Models

      Tue, Jun 4, 5:00 PM - 7:00 PM UTC

    Modules

    • Deploying Your Fine-Tuned Model

Meet your instructor

Dan Becker

Chief Generative AI Architect @ Straive

Dan has worked in AI since 2011, when he finished 2nd (out of 1350+ teams) in a Kaggle competition with a $500k prize. He contributed code to TensorFlow as a data scientist at Google and he has taught online deep learning courses to over 250k people. Dan has advised AI projects for 6 companies in the Fortune 100.

Hamel Husain

Founder @ Parlance Labs

Hamel is an ML engineer who loves building machine learning infrastructure and tools 👷🏼‍♂️. He leads or contributes to many popular open-source machine learning projects. His extensive experience (20+ years) as a machine learning engineer spans various industries, including large tech companies like Airbnb and GitHub.

Hamel is an independent consultant helping companies operationalize LLMs. At GitHub, Hamel led CodeSearchNet, a large language model for semantic search that was a precursor to Copilot, a large language model used by millions of developers.

Join an upcoming cohort

LLM Fine-Tuning for Data Scientists and Software Engineers

Cohort 1

$500 USD

Dates

May 14—June 4, 2024

Payment Deadline

May 10, 2024

Course schedule

4-6 hours per week
  • Tuesdays

1:00pm - 3:00pm ET

    Interactive weekly workshops where you will learn the tools you will apply in your course project.

  • Weekly projects

    2 hours per week

You will build and deploy an LLM as part of the course project. The course project is divided into four weekly projects.


    By the end, you will not only know about fine-tuning, but you will have hands-on experience doing it.
